Archive for Proceedings of the Royal Society

Galton’s 1904 paper in Nature

Posted in Books, Statistics, University life on October 11, 2022 by xi'an

Nature [28 September] posted an editorial apologizing for publishing Galton’s 1904 speech on eugenics as part of “material that contributed to bias, exclusion and discrimination in research and society”. An apology that I do not find particularly pertinent from an historical viewpoint, given the massive temporal, academic, and societal distance separating us from this “paper”, which reads more like a pamphlet than a scientific paper by current standards. Reading these 1904 Nature articles shows no more connection with a modern scientific journal than considering Isaac Newton’s alchemy notes in the early proceedings of the Royal Society.

“The aim of eugenics is to represent each class or sect by its best specimens; that done, to leave them to work out their common civilization in their own way.” F. Galton

Galton’s speech was published in extenso by the American Journal of Sociology, along with discussions from participants in the Sociological Society meeting. This set of discussions is rather illuminating as the views of the 1904 audience are quite heterogeneous, from complete adherence to a eugenic “golden” future (see the zealous interventions of K. Pearson or B. Shaw), to misgivings about the ability to define the supposed ranking of members of society by worth or intelligence (H.G. Wells), to rejection of the claim that moral traits are genetically inherited (Mercier), to protests against the negation of individual freedom induced by a eugenic state (B. Kidd), and to common-sense remarks that improvements in the living conditions of the working classes were the key factor in improving society. But, overall, there was no disagreement therein on the very notion of races or on the supposed superiority of Victorian civilization (with an almost complete exclusion of women from the picture), reflecting the prejudices of the era, and it is quite unlikely that this 1904 paper of Galton’s had any impact on those prejudices.

confidence in confidence

Posted in Statistics, University life on June 8, 2022 by xi'an

[This is a ghost post that I wrote eons ago and which got lost in the meanwhile.]

Following the false confidence paper, Céline Cunen, Nils Hjort & Tore Schweder wrote a short paper in the same Proceedings A defending confidence distributions, blaming the phenomenon on Bayesian tools, which “might have unfortunate frequentist properties”. Which comes as no surprise since Tore Schweder and Nils Hjort wrote a book promoting confidence distributions for statistical inference.

“…there will never be any false confidence, and we can trust the obtained confidence!”

Their re-analysis of Balch et al. (2019) is that using a flat prior on the location (of a satellite) leads to a non-central chi-squared distribution as the posterior on the squared distance δ² (between two satellites). Which incidentally happens to be a case pointed out by Jeffreys (1939) against the use of the flat prior, as the posterior mean of δ² is the non-centrality parameter plus a constant bias of d (the dimension of the space). And offers a neat contrast between the posterior, with non-central chi-squared cdf with two degrees of freedom

F(\delta)=\Gamma_2(\delta^2/\sigma^2;||y||^2/\sigma^2)

and the confidence “cumulative distribution”

C(\delta)=1-\Gamma_2(||y||^2/\sigma^2;\delta^2/\sigma^2)
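
Both curves are easy to evaluate numerically. Here is a minimal Python sketch, assuming the two-dimensional setting above, so that Γ₂(·;λ) is the cdf of a non-central chi-squared with 2 degrees of freedom and non-centrality λ; the values of y and σ are purely illustrative, not taken from either paper:

import numpy as np
from scipy.stats import ncx2

def posterior_cdf(delta, y, sigma):
    # Bayesian posterior cdf F(delta) under a flat prior on the location:
    # delta^2/sigma^2 | y ~ non-central chi^2(2 df, nc = ||y||^2/sigma^2)
    return ncx2.cdf(delta**2 / sigma**2, df=2, nc=np.sum(y**2) / sigma**2)

def confidence_cdf(delta, y, sigma):
    # confidence distribution C(delta): argument and non-centrality swapped
    return 1 - ncx2.cdf(np.sum(y**2) / sigma**2, df=2, nc=delta**2 / sigma**2)

y = np.array([1.0, 2.0])  # illustrative observed relative position
sigma = 0.5
for d in (1.0, np.hypot(*y), 4.0):
    print(f"delta={d:4.2f}  F={posterior_cdf(d, y, sigma):.3f}  C={confidence_cdf(d, y, sigma):.3f}")

At δ = ||y||, both functions sit close to one half, while they differ in the tails, which is precisely where small collision probabilities are evaluated.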

Cunen et al. (2020) argue that the frequentist properties of the confidence distribution 1-C(R), where R is the impact distance, are robust to an increasing σ when the true value is also R. Which does not seem to demonstrate much. A second illustration of the posterior and confidence distributions when the distance δ varies and both σ and ||y||² are fixed is even more puzzling, as the authors criticize the Bayesian credible interval for missing the “true” value of δ, a statement I find meaningless for a fixed value of ||y||²… Looking forward to the third round, i.e., a rebuttal by Balch et al.!

false confidence, not fake news!

Posted in Books, Statistics on May 28, 2021 by xi'an

“…aerospace researchers have recognized a counterintuitive phenomenon in satellite conjunction analysis, known as probability dilution. That is, as uncertainty in the satellite trajectories increases, the epistemic probability of collision eventually decreases. Since trajectory uncertainty is driven by errors in the tracking data, the seemingly absurd implication of probability dilution is that lower quality data reduce the risk of collision.”

In 2019, Balch, Martin, and Ferson published a false confidence theorem [false confidence, not false theorem!] in the Proceedings of the Royal [astatistical] Society, motivated by satellite conjunction (i.e., fatal encounter) analysis. But discussing in fine the very meaning of a confidence statement. And returning to the century-old opposition between randomness and epistemic uncertainty, aleatory versus epistemic probabilities.

“…the counterintuitiveness of probability dilution calls this [use of epistemic probability] into question, especially considering [its] unsettled status in the statistics and uncertainty quantification communities.”

The practical aspect of the paper is unclear, in that the opposition of aleatory versus epistemic probabilities does not really apply when the model connecting the observables with the position of the satellites is unknown and replaced with a stylised parametric model. When ignoring this aspect of uncertainty, the debate is mostly moot.

“…the problem with probability dilution is not the mathematics (…) if (…) inappropriate, that inappropriateness must be rooted in a mismatch between the mathematics of probability theory and the epistemic uncertainty to which they are applied in conjunction analysis.”

The probability dilution phenomenon as described in the paper is that, when (posterior) uncertainty increases, the posterior probability of collision eventually decreases, which makes sense since poor precision implies the observed distance is less trustworthy and the satellite could be anywhere. To conclude that increasing the prior or epistemic uncertainty makes the satellites safer from collision is thus fairly absurd, as it only concerns the confidence in the statement that there will be a collision. But I agree with the conclusion that reporting a low posterior probability is a misleading risk metric because, just like p-values, it is a.s. taken at face value. Bayes factors do relativise this statement [but are not mentioned in the paper], albeit with the spectre of the Lindley-Jeffreys paradox looming in the background.
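
A quick numerical illustration of the dilution effect, reusing the flat-prior non-central chi-squared posterior from the post above; the collision radius R, the observed position y, and the grid of σ values are all invented for the example:

import numpy as np
from scipy.stats import ncx2

R = 1.0                    # hypothetical collision radius
y = np.array([3.0, 4.0])   # illustrative observed relative position, ||y|| = 5
for sigma in (0.5, 1.0, 2.0, 5.0, 20.0):
    # posterior probability of collision, P(delta <= R | y), under a flat prior
    p = ncx2.cdf(R**2 / sigma**2, df=2, nc=np.sum(y**2) / sigma**2)
    print(f"sigma={sigma:5.1f}  P(collision)={p:.5f}")

The collision probability first rises with σ and then dilutes back towards zero: past some point, more noise in the tracking data mechanically lowers the reported risk.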

The authors’ notion of false confidence is formally a highly probable [in the sample space] report of a high belief in a subset A of the parameter set when the true parameter does not belong to A. Which holds for all epistemic probabilities in the sense that there always exists such a set A. A theorem that I see as related to the fact that integrating an epistemic probability statement [conditional on the data x] wrt the true sampling distribution [itself conditional on the parameter θ] is not coherent from a probabilistic standpoint. The resolution of the paradox follows a principle set by Ryan Martin and Chuanhai Liu, such that “it is almost a tautology that a statistical approach satisfying this criterion will not suffer from the severe false confidence phenomenon”, although it sounds to me like a weak patch on a highly perforated tyre, namely the erroneous interpretation of probabilistic statements as frequentist ones.
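
The phenomenon can be checked by simulation in the satellite example. A Monte Carlo sketch, with invented numbers: the true relative position lies inside the collision radius, yet for a large tracking error σ the flat-prior posterior puts high belief on the (false) “no collision” set in essentially every replication:

import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(0)
R, sigma = 1.0, 10.0           # collision radius, (poor) tracking precision
theta = np.array([0.5, 0.0])   # true relative position, inside the radius
n_rep = 10_000

ys = theta + sigma * rng.standard_normal((n_rep, 2))
# posterior probability of no collision, P(delta > R | y), under a flat prior
p_safe = ncx2.sf(R**2 / sigma**2, df=2, nc=np.sum(ys**2, axis=1) / sigma**2)
print("frequency of posterior belief in 'no collision' above 0.95:",
      np.mean(p_safe > 0.95))

which returns a frequency of (essentially) one: the set A = {no collision} does not contain the truth, yet receives belief above 0.95 with sampling probability near one.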
