AIQ [book review]

AIQ was my Christmas day read, which I mostly read while the rest of the household was still sleeping. The book, written by two Bayesians, Nick Polson and James Scott, was published before the ISBA meeting last year, but I only bought it on my last trip to Warwick [as a Xmas present]. This is a pleasant book to read, especially while drinking tea by the fire! It is well-written and full of facts and anecdotes I did not know or had forgotten (more below). Intended for a general audience, it is also quite light, rather obviously on the technical side, but also on the philosophical side. While strongly positive about the potential of AIs for the general good, it cannot be seen as an antidote to the doom-laden Superintelligence by Nick Bostrom or the more factual Weapons of Math Destruction by Cathy O'Neil. (Both commented upon on the 'Og.)

Indeed, I find the book quite benevolent and maybe a wee bit too rosy in its assessment of AIs, and the discussion of how Facebook and Russian intervention may have significantly contributed to turning the White House orange misses [imho] the viral nature of the game, when endless loops of highly targeted posts can cut people off from the most basic common sense. While the authors are “optimistic that, given the chance, people can be smart enough”, I do reflect on the sheer fact that the hoax that Hillary Clinton was involved in a child sex ring was ever taken seriously by some people, to the point of someone opening fire at the pizza restaurant. I am hence much less optimistic about the ability of a large enough portion of the population, not even the majority, to keep a critical distance from the messages carried by AI-driven media. Similarly, while Nick and James point out (rather late in the book) that big data (meaning large data) is not necessarily good data, being unrepresentative of the population at large, they do not propose (in the book) highly convincing solutions to battle bias in existing and incoming AIs. This leads to the global worry that AIs may do well for a majority of the population while discriminating against a minority by the same reasoning. As described in Cathy O'Neil's book, and elsewhere, proprietary software does not even have to explain why it discriminates. More globally, the business-school environment of the authors may have prevented them from voicing a worry about the massive power grab by AI-based companies, which generically grow with little interest in democracy and states, as shown (again) by the recent election or by their systematic fiscal optimisation. Or by the massive recourse to machine learning by the Chinese authorities towards a social credit grade for all citizens.

“La rage de vouloir conclure est une des manies les plus funestes et les plus stériles qui appartiennent à l'humanité. Chaque religion et chaque philosophie a prétendu avoir Dieu à elle, toiser l'infini et connaître la recette du bonheur.” Gustave Flaubert [“The rage to conclude is one of the most deadly and most sterile manias that belong to humanity. Each religion and each philosophy has claimed to have God to itself, to measure the infinite, and to know the recipe for happiness.”]

I did not know about Henrietta Leavitt's prediction rule for pulsating stars, behind Hubble's discovery, which sounds like an astronomy dual to Rosalind Franklin's DNA contribution. The use of Bayes' rule for locating lost vessels is also found in The Theorem That Would Not Die, although I would have also mentioned its failure in locating Malaysia Airlines Flight 370. I had also never heard the great expression of “model rust”. Nor the above quote from Flaubert. It seems I recently spotted the story of how a 180° switch in perspective on language understanding by machines brought the massive improvement that we witness today, but I cannot remember where. And I have also read about Newton missing the boat on the accuracy of the coinage (was it in Bryson's book on the Royal Society?!), but with less neutral views on the role of Newton in the matter, as the Laplace of England would have benefited from keeping the lax measures of assessment.

Great to see friendly figures like Luke Bornn and Katherine Heller appearing in the pages: Luke for his work on the statistical analysis of basketball games, Katherine for her work on predictive analytics in medicine, reflecting on the missed opportunities represented by the accumulation of data on any patient throughout their life, data as grossly ignored nowadays as it was at Nightingale's time. The message of the chapter [on “The Lady with the Lamp”] may again be somewhat over-optimistic: while AI and health companies see clear incentives in developing more encompassing prediction and diagnostic techniques, these will only benefit patients who can afford the ensuing care, which, given the state of health-care systems in the most developed countries, is a decreasing proportion. Not to mention the less developed countries.

Overall, a nice read for the general public, de-dramatising the rise of the machines and mixing statistics and machine learning to explain the (human) intelligence behind the AIs. Nothing on the technical side, to be sure, but this was not the intention of the authors.

3 Responses to “AIQ [book review]”

  1. Tom Loredo Says:

    Re: Henrietta Leavitt's work, in 2009 the American Astronomical Society (AAS) passed a resolution requesting that the Cepheid variable star “period-luminosity relation” (the prediction rule you cite) be named the “Leavitt law.” This doesn't quite have the force of law (ahem), however. It's the International Astronomical Union (IAU) that is vested with the authority for naming astronomical bodies and laws, and they have yet to act on this. In the meantime, in the US, “Leavitt law” has quickly taken hold. I'm doing research on it, in fact, and that's the name my team uses (including a UK colleague!).

    But perhaps of greater interest to a French scientist 8-): the IAU recently *did* act to rename a more famous law. In fact, it's the cosmological expansion law traditionally attributed to Hubble, that you referred to. It's now called the Hubble-Lemaître law, a tribute to the Belgian Jesuit Georges Lemaître. The law was first derived theoretically by Alexander Friedmann. Fr. Lemaître clarified the theory, and then used some galaxy distance & velocity data (using Leavitt's law) to estimate the expansion rate (the Hubble parameter), demonstrating that the predicted expansion was in fact occurring. He published in an obscure journal, and the work went unnoticed. Being a humble fellow, he didn't fuss about it when Hubble later presented a similar analysis of the data. There are many tellings of the story in the press, and a good talk on it (a U. Chicago public lecture) by one of my Cornell colleagues (and a co-founder of the Society of Catholic Scientists).

    • Ha, ha, sounds like an astronomy parallel to the Cramér-Rao bound, which is almost never called the Fréchet-Darmois-Cramér-Rao bound, and the Pitman-Koopman Lemma, less known as the Pitman-Koopman-Darmois Lemma. Indeed, being published in French did not help, even in the Proceedings of the French Academy of Sciences (CRAS) for Darmois. Publishing during the second world war in the Revue de l'ISUP (which was my first position in Paris) did not help either. I learned from his Wikipedia page that Lemaître also developed the theory of quaternions from first principles and was one of the inventors of the fast Fourier transform. (Akin to Fréchet, he served in the trenches during the first world war.)

    • On the side, I idly wonder at the point of having a Society of name-a-religion Scientists. Isn’t Science a universal and unifying concept?
