Archive for Royal Society

false confidence, not fake news!

Posted in Books, Statistics on May 28, 2021 by xi'an

“…aerospace researchers have recognized a counterintuitive phenomenon in satellite conjunction analysis, known as probability dilution. That is, as uncertainty in the satellite trajectories increases, the epistemic probability of collision eventually decreases. Since trajectory uncertainty is driven by errors in the tracking data, the seemingly absurd implication of probability dilution is that lower quality data reduce the risk of collision.”

In 2019, Balch, Martin, and Ferson published a false confidence theorem [false confidence, not false theorem!] in the Proceedings of the Royal [astatistical] Society, motivated by satellite conjunction (i.e., fatal encounter) analysis. But the paper also discusses in fine the very meaning of a confidence statement, returning to the century-old opposition between randomness and epistemic uncertainty, aleatory versus epistemic probabilities.

“…the counterintuitiveness of probability dilution calls this [use of epistemic probability] into question, especially considering [its] unsettled status in the statistics and uncertainty quantification communities.”

The practical aspect of the paper is unclear in that the opposition of aleatory versus epistemic probabilities does not really apply when the model connecting the observables with the positions of the satellites is unknown and replaced with a stylised parametric model. When ignoring this aspect of uncertainty, the debate is mostly moot.

“…the problem with probability dilution is not the mathematics (…) if (…) inappropriate, that inappropriateness must be rooted in a mismatch between the mathematics of probability theory and the epistemic uncertainty to which they are applied in conjunction analysis.”

The probability dilution phenomenon as described in the paper is that, when (posterior) uncertainty increases, the posterior probability of collision eventually decreases, which makes sense since poor precision implies the observed distance is less trustworthy and the satellite could be anywhere. To conclude that increasing the prior or epistemic uncertainty makes the satellites safer from collision is thus fairly absurd, as it only concerns the confidence in the statement that there will be a collision. But I agree with the conclusion that the statement of a low posterior probability is a misleading risk metric because, just like p-values, it is a.s. taken at face value. Bayes factors do relativise this statement [but are not mentioned in the paper], albeit with the spectre of the Lindley-Jeffreys paradox looming in the background.
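The dilution effect is easy to reproduce in a toy model. The sketch below is my own 1-D simplification (not the paper's actual 2-D conjunction geometry): the true miss distance is modelled as Gaussian around the tracked estimate, and the epistemic probability of collision is the mass falling within a hypothetical hard-body radius.

```python
from math import erf, sqrt

def collision_prob(d_obs, sigma, R=1.0):
    """Epistemic probability that the true miss distance lies within the
    hard-body radius R, when it is modelled as N(d_obs, sigma^2) given
    the tracking estimate d_obs (1-D toy model)."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal cdf
    return Phi((R - d_obs) / sigma) - Phi((-R - d_obs) / sigma)

# observed miss distance of five radii; crank up the tracking error sigma
for sigma in (0.5, 2.0, 5.0, 20.0, 100.0):
    print(f"sigma={sigma:6.1f}   P(collision)={collision_prob(5.0, sigma):.4f}")
```

The probability first rises with σ, then dilutes towards zero: with essentially no information on the trajectories, almost all of the epistemic mass sits outside the (tiny) collision region, hence the "lower quality data reduce the risk" absurdity.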

The authors’ notion of false confidence is formally a highly probable [in the sample space] report of a high belief in a subset A of the parameter set when the true parameter does not belong to A. Which holds for all epistemic probabilities in the sense that there always exists such a set A. A theorem that I see as related to the fact that integrating an epistemic probability statement [conditional on the data x] wrt the true sampling distribution [itself conditional on the parameter θ] is not coherent from a probabilistic standpoint. The resolution of the paradox follows a principle set by Ryan Martin and Chuanhai Liu, such that “it is almost a tautology that a statistical approach satisfying this criterion will not suffer from the severe false confidence phenomenon”, although it sounds to me that this is a weak patch on a highly perforated tyre, namely, the erroneous interpretation of probabilistic statements as frequentist ones.
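The phenomenon can be checked empirically by taking A to be the "no collision" event when the satellites do in fact collide: with a diluted enough epistemic probability, the belief in A is near one for almost every dataset. A quick Monte Carlo illustration in the same hypothetical 1-D toy model (again my simplification, not the paper's construction):

```python
import random
from math import erf, sqrt

def no_collision_belief(d_obs, sigma, R=1.0):
    """Epistemic probability assigned to 'no collision' (|distance| > R)
    when the true miss distance is modelled as N(d_obs, sigma^2)."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal cdf
    return 1.0 - (Phi((R - d_obs) / sigma) - Phi((-R - d_obs) / sigma))

random.seed(1)
true_d, sigma = 0.0, 50.0          # the satellites actually collide (|d| < R)
beliefs = [no_collision_belief(random.gauss(true_d, sigma), sigma)
           for _ in range(10_000)]
freq = sum(b > 0.95 for b in beliefs) / len(beliefs)
print(f"sampling frequency of belief(no collision) > 0.95: {freq:.3f}")
```

With σ this large the reported belief in safety exceeds 0.95 on essentially every sample, even though the true parameter sits squarely in the collision set: a belief function that is highly probably high on a set missing the truth.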

Bayes plaque

Posted in Books, pictures, Statistics, Travel, University life on November 22, 2019 by xi'an

AIQ [book review]

Posted in Books, Statistics on January 11, 2019 by xi'an

AIQ was my Christmas Day read, mostly while the rest of the household was still sleeping. The book, written by two Bayesians, Nick Polson and James Scott, was published before the ISBA meeting last year, but I only bought it on my last trip to Warwick [as a Xmas present]. This is a pleasant book to read, especially while drinking tea by the fire!, well-written and full of facts and anecdotes I did not know or had forgotten (more below). Intended for a general audience, it is also quite light, on the technical side, rather obviously, but also on the philosophical side. While strongly positive about the potential of AIs for the general good, it cannot be seen as an antidote to the doom-laden Superintelligence by Nick Bostrom or the more factual Weapons of Math Destruction by Cathy O’Neil. (Both commented on the ‘Og.)

Indeed, I find the book quite benevolent and maybe a wee bit too rosy in its assessment of AIs, and the discussion of how Facebook and Russian intervention may have significantly helped to turn the White House orange misses [imho] the viral nature of the game, when endless loops of highly targeted posts can cut people off from the most basic common sense. While the authors are “optimistic that, given the chance, people can be smart enough”, I do reflect on the sheer fact that the hoax that Hillary Clinton was involved in a child sex ring was ever taken seriously by people. To the point of someone shooting at the pizza restaurant. And I am hence much less optimistic about the ability of a large enough portion of the population, not even the majority, to keep a critical distance from the message carried by AI-driven media. Similarly, while Nick and James point out (rather late in the book) that big data (meaning large data) is not necessarily good data, for being unrepresentative of the population at large, they do not propose (in the book) highly convincing solutions to battle bias in existing and incoming AIs. Leading to a global worry that AIs may do well for a majority of the population and discriminate against a minority by the same reasoning. As described in Cathy O’Neil‘s book, and elsewhere, proprietary software does not even have to explain why it discriminates. More globally, the business-school environment of the authors may have prevented them from stating a worry about the massive power grab by the AI-based companies, which inherently grow with little interest in democracy and states, as shown (again) by the recent election or their systematic fiscal optimisation. Or by the massive recourse to machine learning by Chinese authorities towards a social credit grade for all citizens.

“La rage de vouloir conclure est une des manies les plus funestes et les plus stériles qui appartiennent à l’humanité. Chaque religion et chaque philosophie a prétendu avoir Dieu à elle, toiser l’infini et connaître la recette du bonheur.” Gustave Flaubert [“The rage for wanting to conclude is one of the most deadly and most sterile manias that belong to humanity. Each religion and each philosophy has claimed to have God to itself, to measure the infinite, and to know the recipe for happiness.”]

I did not know about Henrietta Leavitt’s prediction rule for pulsating stars, behind Hubble’s discovery, which sounds like an astronomy dual to Rosalind Franklin’s DNA contribution. The use of Bayes’ rule for locating lost vessels is also found in The Theorem That Would Not Die. Although I would have also mentioned its failure in locating Malaysia Airlines Flight 370. I had also never heard the great expression of “model rust”. Nor the above quote from Flaubert. It seems I have recently spotted the story on how a 180° switch in perspective on language understanding by machines brought the massive improvement that we witness today. But I cannot remember where. And I have also read about Newton missing the boat on the accuracy of the coinage (was it in Bryson’s book on the Royal Society?!), but with less neutral views on the role of Newton in the matter, as the Laplace of England would have benefited from keeping the lax measures of assessment.

Great to see friendly figures like Luke Bornn and Katherine Heller appearing in the pages, Luke for his work on the statistical analysis of basketball games, Katherine for her work on predictive analytics in medicine. Reflecting on the missed opportunities represented by the accumulation of data on any patient throughout their life, data as grossly ignored nowadays as it was at Nightingale‘s time. The message of the chapter [on “The Lady with the Lamp”] may again be somewhat over-optimistic: while AI and health companies see clear incentives in developing more encompassing prediction and diagnostic techniques, this will only benefit patients who can afford the ensuing care. Which, given the state of health-care systems in the most developed countries, is a decreasing proportion. Not to mention the less developed countries.

Overall, a nice read for the general public, de-dramatising the rise of the machines!, and mixing statistics and machine learning to explain the (human) intelligence behind the AIs. Nothing on the technical side, to be sure, but this was not the intention of the authors.

the first Bayesian

Posted in Statistics on February 20, 2018 by xi'an

In the first issue of Statistical Science for this year (2018), Stephen Stigler pursues the origins of Bayesianism as attributable to Richard Price, main author of Bayes’ Essay. (This incidentally relates to an earlier ‘Og piece on that notion!) Steve points out the considerable inputs of Price to this Essay, even though the mathematical advance is very likely to be entirely Bayes’. It may however well be Price who initiated Bayes’ reflections on the matter, towards producing a counter-argument to Hume’s “Of Miracles”.

“Price’s caution in addressing the probabilities of hypotheses suggested by data is rare in early literature.”

A section of the paper is about Price’s approach to data-determined hypotheses and the fact that considering such hypotheses cannot easily fit within a Bayesian framework. As stated by Price, “it would be improbable as infinite to one”. Which is a nice way to address the infinite mass prior.


Roberts and Speed elected to the Fellowship of the Royal Society

Posted in Books, Statistics, University life on May 4, 2013 by xi'an

I just found out that Gareth Roberts and Terry Speed have been elected as Fellows of the Royal Society (FRS). Congratulations to both for this prestigious recognition of their major contributions to Science! (Another Fellow elected this year is Bill Bryson, in recognition of his scientific popularisation books. Including one on the Royal Society I reviewed for CHANCE a few months ago.)