Archive for artificial intelligence

the Montréal declarAIon

Posted in University life on April 27, 2019 by xi'an

In conjunction with Yoshua Bengio being one of the three recipients of the 2018 Alan Turing Award, Nature ran an interview with him about the Montréal Declaration for a responsible AI, which he launched at NeurIPS last December.

“Self-regulation is not going to work. Do you think that voluntary taxation works? It doesn’t.”

Reflecting on the dangers of abuse of and by AIs, from surveillance to discrimination, the interview remains somewhat unclear on the means to implement the ten generic principles listed there. (I had missed the Declaration when it came out.) While I agree with the principles stressed by this list, namely well-being, autonomy, privacy, democracy, diversity, prudence, responsibility, and sustainability, it remains to be seen how they can be imposed upon corporations, whose public image puts more restraint on them than ethics does, or upon governments that are all too ready to automatise justice, police, and the restriction of citizens' rights. This makes the construction of a responsible AI institution difficult to imagine, if the current lack of outreach of extra-national institutions is any gauge. (A striking coincidence is that, when Yoshua Bengio pointed out that France was trying to make Europe an AI power, an op-ed in Le Monde also noted the lack of practical impact of this call to arms, apart from more academics moving to half-time positions in private companies.) [Warning: the picture does not work well with the dark background theme of this blog.]

Bayesian intelligence in Warwick

Posted in pictures, Statistics, Travel, University life, Wines on February 18, 2019 by xi'an

This is an announcement for an exciting CRiSM Day in Warwick on 20 March 2019, with speakers

10:00-11:00 Xiao-Li Meng (Harvard): “Artificial Bayesian Monte Carlo Integration: A Practical Resolution to the Bayesian (Normalizing Constant) Paradox”

11:00-12:00 Julien Stoehr (Dauphine): “Gibbs sampling and ABC”

14:00-15:00 Arthur Ulysse Jacot-Guillarmod (École Polytechnique Fédérale de Lausanne): “Neural Tangent Kernel: Convergence and Generalization of Deep Neural Networks”

15:00-16:00 Antonietta Mira (Università della Svizzera italiana e Università degli studi dell’Insubria): “Bayesian identifications of the data intrinsic dimensions”

[whose abstracts are on the workshop webpage] Attendance is free. The title of the workshop mentions Bayesian intelligence: this obviously includes human intelligence and not just AI!

Nature Outlook on AI

Posted in Statistics on January 13, 2019 by xi'an

The 29 November 2018 issue of Nature carried a series of papers on AIs (in its Outlook section), written at the general-public (awareness) level rather than as in-depth machine-learning articles. Including one on the forecast consequences of ever-growing automation on jobs, quoting from a 2013 paper by Carl Frey and Michael Osborne [of probabilistic numerics fame!] that up to 47% of US jobs could become automated. The paper is inconclusive on how taxation could help in or deter from transferring jobs to other branches, although it mentions the cascading effect of taxing labour and subsidizing capital. Another article covers the progress in digital government, with Estonia as a role model, including the risks of hacking (but not mentioning Russia's state-driven attacks). Differential privacy is discussed as a way to keep data “secure” (but not cryptography à la Louis Aslett!). With the surprising aside that COBOL is still in use in some administrative systems. Followed by a paper on the apparently limited impact of digital technologies on mental health, despite the advertising efforts of big tech companies being described as a “race to the bottom of the brain stem”! And another one on (overblown) public expectations of AIs, although the New York Times had an entry yesterday on people in Arizona attacking self-driving cars with stones and pipes… Plus a paper on the growing difficulties of saving online documents and culture for the future (although saving all tweets ever published does not sound like a major priority to me!).

An interesting (?) aside: the same issue contains a general-public article on the use of AIs for peer review (of submitted papers). The claim is that “peer review by artificial intelligence (AI) is promising to improve the process, boost the quality of published papers — and save reviewers time.” A wee bit over-optimistic, I would say, as the AIs developed so far at best check “that statistics and methods in manuscripts are sound”. For instance, producing “key concepts to summarize what the paper is about” is not particularly useful. A measure of innovation relative to the existing literature would be. Or an automated way to adapt the paper style to the strict and somewhat elusive Biometrika style!

AIQ [book review]

Posted in Books, Statistics on January 11, 2019 by xi'an

AIQ was my Christmas day read, mostly enjoyed while the rest of the household was still sleeping. The book, written by two Bayesians, Nick Polson and James Scott, was published before the ISBA meeting last year, but I only bought it on my last trip to Warwick [as a Xmas present]. This is a pleasant book to read, especially while drinking tea by the fire!, well-written and full of facts and anecdotes I did not know or had forgotten (more below). Intended for a general audience, it is also quite light, from a technical side, rather obviously, but also from a philosophical side. While strongly positivist about the potential of AIs for the general good, it cannot be seen as an antidote to the doom-like Superintelligence by Nick Bostrom or the more factual Weapons of Math Destruction by Cathy O'Neil. (Both commented on the 'Og.)

Indeed, I find the book quite benevolent and maybe a wee bit too rosy in its assessment of AIs, and the discussion of how Facebook and Russian intervention may have significantly contributed to turning the White House orange misses [imho] the viral nature of the game, when endless loops of highly targeted posts can cut people off from the most basic common sense. While the authors are “optimistic that, given the chance, people can be smart enough”, I do reflect on the sheer fact that the hoax that Hillary Clinton was involved in a child sex ring was ever taken seriously by some. To the point of someone shooting at the pizza restaurant. And I am hence much less optimistic about the ability of a large enough portion of the population, not even a majority, to keep a critical distance from the message carried by AI-driven media. Similarly, while Nick and James point out (rather late in the book) that big data (meaning large data) is not necessarily good data, for being unrepresentative of the population at large, they do not propose (in the book) highly convincing solutions to battle bias in existing and incoming AIs. Leading to a global worry that AIs may do well for a majority of the population and discriminate against a minority by the same reasoning. As described in Cathy O'Neil's book, and elsewhere, proprietary software does not even have to explain why it discriminates. More globally, the business-school environment of the authors may have prevented them from stating a worry about the massive power grab by AI-based companies, which genetically grow with little interest in democracy and states, as shown (again) by the recent election or their systematic fiscal optimisation. Or by the massive recourse to machine learning by Chinese authorities towards a social credit grade for all citizens.

“La rage de vouloir conclure est une des manies les plus funestes et les plus stériles qui appartiennent à l’humanité. Chaque religion et chaque philosophie a prétendu avoir Dieu à elle, toiser l’infini et connaître la recette du bonheur.” [“The rage to conclude is one of the most deadly and most sterile manias belonging to humanity. Each religion and each philosophy has claimed to have God to itself, to size up the infinite, and to know the recipe for happiness.”] Gustave Flaubert

I did not know about Henrietta Leavitt's prediction rule for pulsating stars, behind Hubble's discovery, which sounds like an astronomy dual to Rosalind Franklin's DNA contribution. The use of Bayes' rule for locating lost vessels is also found in The Theorem that Would Not Die. Although I would have also mentioned its failure in locating Malaysia Airlines Flight 370. I had also never heard the great expression “model rust”. Nor the above quote from Flaubert. It seems I had recently spotted the story on how a 180° switch in perspective on language understanding by machines brought the massive improvement that we witness today. But I cannot remember where. And I have also read about Newton missing the boat on the assessment of coinage accuracy (was it in Bryson's book on the Royal Society?!), but with less neutral views on the role of Newton in the matter, as the Laplace of England would have benefited from keeping the lax measures of assessment.

Great to see friendly figures like Luke Bornn and Katherine Heller appearing in the pages, Luke for his work on the statistical analysis of basketball games, Katherine for her work on predictive analytics in medicine. Reflecting on the missed opportunities represented by the accumulation of data on any patient throughout their life, data that is as grossly ignored nowadays as it was in Nightingale's time. The message of the chapter [on “The Lady with the Lamp”] may again be somewhat over-optimistic: while AI and health companies see clear incentives in developing more encompassing prediction and diagnostic techniques, these will only benefit patients who can afford the ensuing care. Which, given the state of health care systems in the most developed countries, is a decreasing proportion. Not to mention the less developed countries.

Overall, a nice read for the general public, de-dramatising the rise of the machines!, and mixing statistics and machine learning to explain the (human) intelligence behind the AIs. Nothing on the technical side, to be sure, but this was not the intention of the authors.

algorithm for predicting when kids are in danger [guest post]

Posted in Books, Kids, Statistics on January 23, 2018 by xi'an

[Last week, I read this article in The New York Times about child abuse prediction software and approached Kristian Lum, of HRDAG, for her opinion on the approach, possibly for a guest post which she kindly and quickly provided!]

A week or so ago, an article about the use of statistical models to predict child abuse was published in the New York Times. The article recounts a heart-breaking story of two young boys who died in a fire due to parental neglect. Despite the fact that social services had received “numerous calls” to report the family, human screeners had not regarded the reports as meeting the criteria to warrant a full investigation. Offered as a solution to imperfect and potentially biased human screeners is the use of computer models that compile data from a variety of sources (jails, alcohol and drug treatment centers, etc.) to output a predicted risk score. The implication here is that, had the human screeners had access to such technology, the software might have issued a warning that the case was high risk and, based on this warning, the screeners might have sent out investigators to intervene, thus saving the children.

These types of models bring up all sorts of interesting questions regarding fairness, equity, transparency, and accountability (which, by the way, are an exciting area of statistical research that I hope some readers here will take up!). For example, most risk assessment models that I have seen are just logistic regressions of [characteristics] on [indicator of undesirable outcome]. In this case, the outcome is likely an indicator of whether child abuse had been determined to take place in the home or not. This raises the issue of whether past determinations of abuse – which make up the training data that is used to make the risk assessment tool – are objective, or whether they encode systemic bias against certain groups that will be passed through the tool to result in systematically biased predictions. To quote the article, “All of the data on which the algorithm is based is biased. Black children are, relatively speaking, over-surveilled in our systems, and white children are under-surveilled.” And one need not look further than the same news outlet to find cases in which there have been egregiously unfair determinations of abuse, which disproportionately impact poor and minority communities. Child abuse isn't my immediate area of expertise, and so I can't responsibly comment on whether these types of cases are prevalent enough that the bias they introduce will swamp the utility of the tool.
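To make the kind of model just described concrete, here is a minimal sketch, in plain NumPy and with entirely hypothetical synthetic data standing in for the [characteristics] and the [indicator of undesirable outcome], of a logistic-regression risk score of the type discussed above. Nothing here is taken from the actual screening tool; the point is only that the fitted scores reproduce whatever is encoded in the recorded outcomes y, biased or not:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Fit a plain logistic regression by gradient descent.

    X: (n, d) feature matrix; y: (n,) 0/1 recorded-outcome indicator.
    Returns weights w (d,) and intercept b.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted risk per case
        w -= lr * (X.T @ (p - y)) / n           # gradient of the log-loss
        b -= lr * np.mean(p - y)
    return w, b

def risk_score(X, w, b):
    """Predicted probability of the recorded outcome (not of the truth)."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# hypothetical screening data: two synthetic characteristics per report
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
true_logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

w, b = fit_logistic(X, y)
scores = risk_score(X, w, b)  # one risk score per reported case
flagged = scores > 0.5        # cases referred for a full investigation
```

The sketch makes the bias-propagation point mechanical: the model is fit to past determinations y, so if those determinations over-record some groups, the scores, and hence the `flagged` decisions, inherit that over-recording with no warning from the fitting procedure itself.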

At the end of the day, we obviously want to prevent all instances of child abuse, and this tool seems to get a lot of things right in terms of transparency and responsible use. And, according to the original article, it (at least on the surface) seems to be effective at more efficiently allocating scarce resources to investigate reports of child abuse. As these types of models are used more and more for a wider variety of predictions, we need to be cognizant that (to quote my brilliant colleague, Josh Norkin) we do not “lose sight of the fact that because this system is so broken all we are doing is finding new ways to sort our country's poorest citizens. What we should be finding are new ways to lift people out of poverty.”

minibatch acceptance for Metropolis-Hastings

Posted in Books, Statistics on January 12, 2018 by xi'an

An arXival from last July by Seita, Pan, Chen, and Canny that relates to my current interest in speeding up MCMC, and to the 2014 papers by Korattikara et al. and Bardenet et al. It has now been published in Uncertainty in AI. The authors claim that their method requires less data per iteration than earlier ones…

“Our test is applicable when the variance (over data samples) of the log probability ratio between the proposal and the current state is less than one.”

By test, the authors mean a mini-batch formulation of the Metropolis-Hastings acceptance ratio in the (special) setting of iid data. First, they use Barker's version of the acceptance probability instead of Metropolis'. Second, they use a Gaussian approximation to the distribution of the logarithm of the Metropolis ratio for the minibatch, while the Barker acceptance step corresponds to comparing a logistic perturbation of the logarithm of the Metropolis ratio against zero. This amounts to comparing the logarithm of the Metropolis ratio for the minibatch, perturbed by a logistic-minus-Normal variate. (The cancellation of the Normal in eqn (13) is a form of fiducial fallacy, where the Normal variate has two different meanings. In other words, the difference of two Normal variates is not equal to zero.) However, the next step escapes me, as the authors seek to optimise the distribution of this logistic-minus-Normal variate, which I thought was uniquely defined as such a difference. Another constraint is that the estimated variance of the log-likelihood ratio must get below one. (Why one?) The argument is that the average of the individual log-likelihoods is approximately Normal by virtue of the Central Limit Theorem. Even when randomised. While the illustrations on a Gaussian mixture and on a logistic regression demonstrate huge gains in computational time, it is unclear to me to what extent one can trust the approximation for a given model and sample size…
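To fix ideas, here is a toy sketch of a Barker-style minibatch acceptance step of the kind described above, on a made-up iid Normal model of my own (this is not the authors' code). The full log Metropolis ratio is estimated from a random minibatch, then perturbed by a correction draw so that the test mimics Barker's logistic acceptance rule; note that drawing the correction as a logistic minus an independent Normal, as done here for simplicity, is exactly the naive shortcut the paper avoids by constructing the correction distribution so that the total noise is logistic:

```python
import numpy as np

rng = np.random.default_rng(1)

def minibatch_barker_step(theta, theta_prop, data, batch_size, loglik):
    """One Barker-style minibatch acceptance test (toy sketch).

    loglik(x, theta) returns the log-likelihood of each observation in x.
    Returns the accepted state (theta_prop if the test passes, else theta).
    """
    n = len(data)
    batch = rng.choice(data, size=batch_size, replace=False)
    # per-observation log-likelihood ratios over the minibatch
    diffs = loglik(batch, theta_prop) - loglik(batch, theta)
    lam_hat = n * diffs.mean()                    # estimate of full log ratio
    sig2_hat = (n**2 / batch_size) * diffs.var(ddof=1)  # its estimated variance
    # naive stand-in for the correction variable: logistic draw minus a
    # Normal draw matching the minibatch noise (the paper instead builds
    # this distribution by deconvolution, under the variance-below-one rule)
    x_corr = rng.logistic() - rng.normal(0.0, np.sqrt(sig2_hat))
    return theta_prop if lam_hat + x_corr > 0 else theta

# toy iid Normal(theta, 1) data with tiny random-walk proposals, so that
# the estimated variance of the log-likelihood ratio stays well below one
data = rng.normal(1.0, 1.0, size=2000)
loglik = lambda x, th: -0.5 * (x - th) ** 2
theta = 0.0
for _ in range(200):
    theta = minibatch_barker_step(theta, theta + rng.normal(0.0, 0.002),
                                  data, batch_size=100, loglik=loglik)
```

The tiny proposal scale is not incidental: with n² / batch_size multiplying the per-observation variance, the applicability condition (estimated variance below one) forces either very small moves or very large minibatches, which is one concrete way to read the trust question raised above.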

Children of Time [book review]

Posted in Books, pictures, Travel on October 8, 2017 by xi'an

I came by this book in the common room of the mathematics department of the University of Warwick, which I visit regularly during my stays there, for it enjoys a book-sharing box where I leave the books I've read (and do not want to carry back to Paris) and where I check for potential catches… One of these books was Tchaikovsky's Children of Time, a great space-opera novel à la Arthur C Clarke, which got the 2016 Arthur C Clarke Award, deservedly so (even though I very much enjoyed The Long Way to a Small, Angry Planet, Tchaikovsky's book is much more of an epic cliffhanger where the survival of an entire race is at stake). The children of time are indeed the last remnants of the human race, surviving in an artificial sleep aboard an ancient spaceship that irremediably deteriorates. Until there is no solution but landing on a terraformed planet created eons ago. And defended by an AI spawned (or spammed) by the scientist in charge of the terra-formation, who created a virus that speeds up evolution, with unintended consequences. Given that the strength of the book relies on these consequences, I cannot get into much detail about the alternative pathway to technology (incl. artificial intelligence) followed by the inhabitants of this new world, and even less about the concluding chapters that make up for a rather slow progression towards this final confrontation. An admirable and deep book I will most likely bring back to the common room on my next trip to Warwick! (As an aside, I wonder if the title was chosen in connection with Goya's picture of Cronus [Time] devouring his children…)