weapons of math destruction [book review]
Having read many comments and reviews about this book, including one by Arthur Charpentier on Freakonometrics, I eventually decided to buy it from my Amazon Associate savings (!). With a strong a priori bias, I am afraid, gathered from reading some excerpts, comments, and the overall advertising about it. And also because the book reminded me of another quantic swan. Not to mention the title. After reading it, I cannot say my assessment has changed much.
“Models are opinions embedded in mathematics.” (p.21)
The core message of this book is that the use of algorithms and AI methods to evaluate and rank people is unsatisfactory and unfair: from predicting recidivism to firing high school teachers, from rejecting loan applications to enticing the most vulnerable groups to enroll in for-profit colleges. Which is indeed unsatisfactory and unfair. Just like using the h-index and citation rankings for promotion or hiring. (The book mentions the controversial hiring of many adjunct faculty by KAU to boost its ranking.) But this conclusion is not enough of an argument to write a whole book. Or even to blame mathematics for the unfairness: as far as I can tell, mathematics has nothing to do with unfairness. Some analysts crunch numbers and produce a score, and then managers make poor decisions. The use of “mathematics” throughout the book is thus completely inappropriate, when the author actually means statistics, machine learning, data mining, predictive algorithms, neural networks, &tc. (OK, there is a small section on Operations Research on p.127, but I figure deep learning can bypass the maths.)
“To calculate risk, our team employed the Monte Carlo method (…) There was plenty to complain about with this method, but it was a simple way to get some handle on your risk.” (p.45)
The book is too much about the author’s growing disillusionment with her jobs, first as a quant and then as a risk analyst. And not enough about the why and the how of the limitations of financial and econometric models used as if they were the real world. And set in stone, while the real world has no reason to be predictable. Or static.
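For readers unfamiliar with the quoted method, here is a minimal sketch of what such a Monte Carlo risk calculation might look like; the lognormal return model, the portfolio value, and the 99% Value-at-Risk target are my own illustrative choices, not the book’s.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (mine, not the book's): a single portfolio
# whose daily log-returns are Gaussian with a slight negative drift.
portfolio_value = 1_000_000.0
mu, sigma = -0.0002, 0.02      # daily drift and volatility of log-returns
n_sims, horizon = 100_000, 10  # simulate 10-day paths

# Monte Carlo step: simulate many possible 10-day cumulative returns...
log_returns = rng.normal(mu, sigma, size=(n_sims, horizon)).sum(axis=1)
losses = portfolio_value * (1 - np.exp(log_returns))

# ...and read the 99% Value-at-Risk off the empirical loss distribution.
var_99 = np.quantile(losses, 0.99)
print(f"10-day 99% VaR: ${var_99:,.0f}")
```

And there is indeed “plenty to complain about”: the Gaussian tails, the constant volatility, and the presumption that tomorrow’s market is drawn from yesterday’s distribution, which is the very “set in stone” issue above.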
“The data scientists start off with a Bayesian approach, which in statistics is pretty close to plain vanilla. The point of Bayesian analysis is to rank the variables with the most impact.” (p.74)
In a long rant on far-from-perfect predictive models, the author goes on and on about college rankings by newspapers and the self-feeding nature of those rankings, plus the predatory features of for-profit colleges, which relate much more to psychology than to maths. (If someone could explain the above quote to me, I would appreciate it!) Those parts rely much more on anecdotes than on the lack of predictive ability of the models behind those rankings. For instance, the scam associated with for-profit colleges has mainly to do with the gullibility of people taking personalised advertising at face value: this is aggressive and dishonest marketing, full stop.
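For what it is worth, the most charitable reading I can come up with for the p.74 quote is a Bayesian regression with a Gaussian prior, where predictors are ranked by the posterior magnitude of their coefficients. A minimal sketch of that reading, on entirely made-up data (variable names, priors, and effect sizes are mine, certainly not the book’s):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 200 students, 4 standardised predictors, only two of
# which actually drive the response (my assumption, for illustration).
n, names = 200, ["SAT", "essays", "legacy", "zip_code"]
X = rng.normal(size=(n, 4))
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

# Conjugate Bayesian linear regression with a N(0, tau^2 I) prior on
# the coefficients: the posterior mean is a ridge-type estimate.
tau2, sigma2 = 1.0, 1.0
post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(4) / tau2)
post_mean = post_cov @ X.T @ y / sigma2

# "Ranking the variables with the most impact" = sorting predictors
# by posterior coefficient magnitude (one possible reading of p.74).
for name, coef in sorted(zip(names, post_mean), key=lambda t: -abs(t[1])):
    print(f"{name:>8}: {coef:+.2f}")
```

Whether this deserves to be called “plain vanilla”, or even the point of Bayesian analysis, is another matter.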
“It’s one more example in which the wealthy and informed get the edge and the poor are more likely to lose out.” (p.114)
The criticism of recidivism- and crime-prediction software spots the blind spots and biases in such tools, as well as the confounding factor of poverty. Plus the self-reinforcing nature of the game, which pushes towards even more bias, as discussed in a recent post about a connected paper by Kristian Lum and William Isaac. (Also mentioning Minority Report, predictably, albeit the movie, not the book.) The same bias impacts résumé-sifting systems, meaning some people are unable to get low-qualification jobs because all companies use the same software. More unfairness and exclusion. However, the overall tone of the book is reflected in the above quote, namely that the “system” is taking advantage of those new classification and prediction tools to reinforce inequalities. While I strongly agree that the “system” should instead strive to reduce inequalities, I do not see a clear connection between the rise of inequalities and the use of such devious algorithms. Thomas Piketty would put it in much better terms than I can, but the current state of capitalism pushes for ever-increasing returns on capital and ever-decreasing relevance of labour, which in turn leads to optimisation systems that treat the human factor as a nuisance. With the help of algorithms or not. But certainly with the help of an ever-decreasing leverage of States and judicial institutions, which are mostly unable to counteract the use of unidimensional ranking criteria at all levels of society, including justice, police, and education. In short, which cannot impose [more] fairness on themselves and others. As illustrated by the scheduling software of Chapter 7 or by the insane zero-hour contracts in the UK. (Or by the amazing fact [Chapter 8] that in most US States employers can ask prospective employees for their credit record!)
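The feedback mechanism analysed by Lum and Isaac is easy to reproduce in a toy simulation (entirely my construction, not theirs): allocate patrols in proportion to past recorded crime, let recorded crime depend on patrol presence, and watch a small initial bias snowball.

```python
import numpy as np

# Toy model (my construction, not Lum & Isaac's actual analysis):
# two districts with the SAME true crime rate, but district 0 starts
# with slightly more patrols.
true_rate = np.array([0.5, 0.5])   # identical underlying crime rates
patrols = np.array([0.55, 0.45])   # slightly biased initial allocation

for t in range(20):
    # Crime is only *recorded* where police are present, so recorded
    # counts reflect patrol presence as much as actual crime.
    recorded = true_rate * patrols
    # The "predictive" step: send more patrols where more crime was
    # recorded (a mild winner-take-more rule, exponent 1.2, stands in
    # for the ranking performed by the software).
    weights = recorded ** 1.2
    patrols = weights / weights.sum()

print(np.round(patrols, 3))  # ~[1., 0.]: the small initial bias snowballs
```

Two districts with identical underlying crime end up with all patrols concentrated in one of them, and those skewed records then feed the recidivism scores downstream.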
“Statistically speaking, the administrators moved from a primary to a secondary model (…) The result is a model with loads of random results, what statisticians call ‘noise’.” (p.137)
Another quote that bemuses me. It occurs in a part of the book that (rightly) punches hard at the teacher-ranking method in which students’ grades are compared with their expected or predicted grades. One correct criticism is that the difference is most likely not significant (see the simulation below). Another (found elsewhere in the book) is that the model is only a model, hence a simplification, meaning less variability than in the real phenomenon and, worse, some bias due to missing factors or changing conditions impacting grades. Yet another criticism (found at this stage) is that a model built on a population of 80,000 eighth-graders does not apply to a classroom of thirty students because they cannot “match up with the larger population” (p.138), which I find particularly lame, as the same could be said of any statistical model used for prediction on any group. Again and again, the underlying argument that emerges is that models cannot identify “the one person” for whom models fail. The outlier. The living example that models are imperfect, that they are constructed on average behaviour in the entire population, with no way to account for “divergent” behaviours. (In a sense this connects with Keynes’ ultimate (and barren) criticism of statistics, namely that a model could never incorporate all parameters. Which ended his connection with statistics and prompted his move to the much more reliable science of economics…) Despite the assertion to the contrary: “Mathematicians didn’t pretend to foresee the fate of each individual. This was unknowable.” (p.163)
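To make the significance point concrete: under my own illustrative assumptions (a teacher with a true effect of zero, residuals with a 15-point standard deviation, classes of thirty), the yearly value-added score of the very same teacher bounces around by several points of pure noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions (mine, not the book's): the teacher has a
# true effect of ZERO, students' observed-minus-predicted grades have
# a standard deviation of 15 points, and the class contains 30 pupils.
sigma, class_size, n_years = 15.0, 30, 10

# Value-added score = mean residual of the class, computed each year.
scores = rng.normal(0.0, sigma, size=(n_years, class_size)).mean(axis=1)
print(np.round(scores, 1))

# Standard error of the class mean: sigma / sqrt(30) ~ 2.7 points,
# so swings of +/- 5 points are pure noise, not teaching quality.
print("sd of yearly scores:", round(scores.std(ddof=1), 2))
```

Firing or rewarding teachers on such swings amounts to flipping coins, which is the author’s (valid) point.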
“This is a far cry from insurance’s original purpose, which was to help society balance its risk.” (p.171)
Another illustration is the chapter about insurance companies moving towards more and more individualised premiums. And possibly imposing monitors in every car. The complaint that this is very different from the mutualised risk of the early days, as e.g. in workers’ unions, does not ring true, as insurance companies appeared very early and were obviously intent on making money from the start. I presume it was only a certain degree of State regulation that kept them from over-charging. And I fail to see why using a smaller number of categories in the past was fairer, as it still discriminated according to age, sex, and past driving history.
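The contrast between mutualised and fully individualised premiums is easy to put in numbers; a toy calculation, with risk levels invented for the occasion:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented risk profile (mine, purely illustrative): 1,000 drivers with
# expected annual claim costs anywhere between $200 and $3,000.
expected_cost = rng.uniform(200, 3000, size=1000)

pooled = expected_cost.mean()  # mutualised: everyone pays the same
individual = expected_cost     # individualised: premium = own risk score

print(f"pooled premium: ${pooled:,.0f} for every driver")
print(f"individual premiums: ${individual.min():,.0f} to ${individual.max():,.0f}")

# Pooling transfers money from low-risk to high-risk drivers; pricing
# each driver at their own expected cost removes that solidarity and
# leaves only protection against one's own bad luck.
```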
“we might consider moving towards the European model, which stipulates that any data collected must be approved by the user” (p.214)
In conclusion, I remain unimpressed by this book, not so much because of its ideological tone, in that I mostly share the feelings expressed by the author about injustice and the exploitation of the poor and the uninformed, but because I see those machine-learning algorithms as anecdotal in the big picture, when multinationals and administrations can discriminate and exclude at will without being truly accountable. Calling for a Hippocratic oath for Big Data or for a regulatory body for machine-learning systems sounds very remote from solving the problem. Making companies and administrations accountable for their decisions, rather than letting them hide behind the machine, would be progress of sorts. More regulation. More data and privacy protection. And, more importantly, a buffer between the conclusion of a model, no matter how accurate, and the decision impacting the people behind it. If anything, this book voices a much more realistic concern about AIs taking over than the recent warnings found e.g. in Superintelligence, as I discussed a year ago. Not the AIs taking over, thus, but some human, all-too-human, powers taking advantage of those tools (or of their machine-hence-objective image) to keep their dominance over some fractions of society.