Archive for crime prediction

Yikes! “AI can predict which criminals may break laws again better than humans”

Posted in Books, pictures, Statistics on February 28, 2020 by xi'an

Science (the journal!) has this heading on its RSS feed page, which makes me wonder if they have been paying any attention to the well-documented issues with AI-driven “justice”.

“…some research has given reason to doubt that algorithms are any better at predicting arrests than humans are.”

Among other issues, the study compared volunteers with the COMPAS and LSI-R algorithms on predicting violent criminal behaviour, based on the same covariates. Volunteers, not experts! And the algorithms are only correct about 80% of the time, which is a terrible performance when someone’s time in jail depends on it!

“Since neither humans nor algorithms show amazing accuracy at predicting whether someone will commit a crime two years down the line, “should we be using [those forecasts] as a metric to determine whether somebody goes free?” Farid says. “My argument is no.””

we have never been unable to develop a reliable predictive model

Posted in Statistics on November 10, 2019 by xi'an

An alarming entry in The Guardian about the huge proportion of councils in the UK using machine-learning software to allocate benefits, detect child abuse, or flag claim fraud. And relying blindly on the outcome of such software, despite its well-documented lack of reliability, uncertainty assessments, and warnings. Blindly in the sense that the impact of the (implemented) decisions was not even reviewed, even though a portion of the councils is not considering renewing the contracts. With the appalling statement of the CEO of one software company reported in the title. Further blaming the inaccessibility [to their company] of the data used by the councils for the impossibility [for the company] of providing risk factors and identifying bias, in an unbelievable newspeak inversion… As pointed out by David Spiegelhalter in the article, the openness should go the other way, namely that the algorithms behind the suggestions (read: decisions) should be available, so as to understand why these decisions were made. (A whole series of Guardian articles relates to this as well, under the heading “Automating poverty”.)

Hippocratic oath for maths?

Posted in Statistics on August 23, 2019 by xi'an

On a free day in Nachi-Katsuura, I came across this call for a professional oath for mathematicians (and computer engineers and scientists in related fields), by UCL mathematician Hannah Fry. The theme is the same as in Weapons of Math Destruction, namely that algorithms have a potentially huge impact on everyone’s life and that those who design these algorithms should be accountable for it. And aware of the consequences when they are used by non-specialists. As illustrated by preventive-justice software. And child-abuse prediction software. Some form of ethics course should indeed appear in data-science programs, if only for pointing out the limitations of automated decision making. However, I remain skeptical of the idea, as (a) taking an oath does not make it impossible to break that oath, especially when one is blissfully unaware of breaking it, (b) acting as ethically as possible should be part of everyone’s job, whether designing deep-learning algorithms or making soba noodles, and (c) the Hippocratic oath is mostly a moral statement that varies from place to place and from one epoch to the next (as, e.g., with respect to abortion, which was prohibited in Hippocrates’ version) and does not prevent some doctors from engaging in unsavory activities. Or from being influenced by drug companies. And such an oath would not force companies to open-source their code, which in my opinion is a better route towards the assessment of such algorithms. The article does not mention either the Montréal Déclaration for a responsible AI, which goes further than a generic and most likely ineffective oath.

El asedio [book review]

Posted in Books, pictures, Travel, Wines on January 13, 2018 by xi'an

Just finished this long book by Arturo Pérez-Reverte, which I bought [in its French translation] after reading the fascinating Dos de Mayo about the rebellion of the people of Madrid against the Napoleonic occupiers. This book, The Siege, is just fantastic, more literary than Dos de Mayo and a mix of different genres, from the military, to the historical, to the criminal, to the chess, to the speculative, to the romantic novel! There are a few major characters, a police investigator, the head of a trading company, a corsair, a French cannon engineer, a guerrillero, within a well-defined unique location, the city of Cádiz under [land] siege by the French troops, but with access to the sea thanks to the British Navy. The serial-killer part is certainly not the best item in the plot [as often with serial-killer stories!], as it slowly drifts towards the supernatural, borrowing from Laplace and Condorcet to lead to perfect predictions of where and when French bombs will fall. The historical part also appears to be rather biased against the British forces, if this opinion page is to be believed, towards a nationalist narrative making the Spanish guerrilla resistance bigger and stronger than it actually was. But I still read the story with fascination and it kept me awake past my usual bedtime for several nights, as I could not let the story go!

To predict and serve?

Posted in Books, pictures, Statistics on October 25, 2016 by xi'an

Kristian Lum and William Isaac published a paper in Significance last week [with the above title] about predictive policing systems used in the USA, and presumably in other countries, to predict future crimes [and therefore prevent them]. This sounds like a good premise for a science-fiction plot, à la Philip K. Dick [in his short story The Minority Report], but that it is used in real life definitely sounds frightening, especially when the civil rights of the targeted individuals are impacted. (Although some politicians in various democratic countries show increasing contempt for keeping everyone’s rights equal…) I also feel terrified by the social determinism behind the very concept of predicting crime from socio-economic data (and possibly genetic characteristics in a near future, bringing us back to the dark days of physiognomy!)

“…crimes that occur in locations frequented by police are more likely to appear in the database simply because that is where the police are patrolling.”

Kristian and William examine in this paper one statistical aspect of police forces relying on crime-prediction software, namely the bias in the data exploited by the software and in the resulting policing. (While the accountability of police actions induced by such software is not explored, this is obviously related to last week’s Nature editorial, “Algorithm and blues“, which [in short] calls for watchdogs on AIs and decision algorithms.) When the data is gathered from police and justice records, any bias in stops, arrests, and convictions will be reproduced in the data and hence will repeat the bias when targeting potential criminals. As aptly put by the authors, the resulting machine-learning algorithm will be “predicting future policing, not future crime.” Worse, by having no qualms about over-fitting [the more predicted crimes the better], it will increase the bias in the same direction. In the Oakland drug-use example analysed in the article, the police concentrate almost exclusively on a few grid squares of the city, resulting in the above self-predicting fallacy. However, I do not see much hope in using other surveys and datasets to eliminate this bias, as they also carry their own shortcomings. Even without biases, predicting crimes at the individual level just seems a bad idea, for statistical and ethical reasons alike.
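The feedback loop can be made concrete with a toy simulation of my own (not taken from the Significance paper; the district names, crime rates, and greedy hot-spot rule are all illustrative assumptions). Two districts have identical true crime rates, but district A starts with marginally more recorded crime; patrols are dispatched to wherever the record is highest, and crimes only enter the database where officers are present to observe them:

```python
import random

random.seed(1)

TRUE_RATE = {"A": 0.5, "B": 0.5}   # identical underlying crime rates
recorded = {"A": 3, "B": 2}        # historical record: A very slightly over-policed
N_PATROLS = 20                     # patrols dispatched each day

for day in range(100):
    # hot-spot policing: send every patrol to the district with most recorded crime
    target = max(recorded, key=recorded.get)
    # crimes are only recorded where officers are present to observe them
    new_crimes = sum(random.random() < TRUE_RATE[target] for _ in range(N_PATROLS))
    recorded[target] += new_crimes

print(recorded)
```

After 100 days, virtually all recorded crime sits in district A and district B’s count has not moved, even though both districts generate crime at exactly the same rate: the algorithm has faithfully predicted future policing, not future crime, and re-training on the accumulated record would only entrench the initial imbalance.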
