Quentin Gronau and Eric-Jan Wagenmakers just arXived a rather exotic paper, in that it merges experimental mathematics with Bayesian inference. The mathematical question at stake is whether or not classical irrational constants like π, e, or √2 are “normal”, that is, whether every digit occurs with the same limiting frequency in their decimal expansion. This (still) is an open problem in mathematics. Indeed, the authors do not provide a definitive answer but instead run a Bayesian testing experiment on 100 million digits of π, ending up with a Bayes factor of 2×10³¹ in favour of the equidistribution hypothesis. The figure is massive, but one must account for the number of “observations” in the sample (which is not a statistical sample, strictly speaking). While I do not think the argument will convince an algebraist (as the counterargument of knowing nothing about the digits after the 10⁸th one is easy to formulate!), I am also uncertain of the relevance of this huge figure, as I am unable to justify a prior on the distribution of digits if the number is not normal, all the more since non-normal irrational numbers are only known through artificial constructions. While the flat Dirichlet prior is a uniform prior over the simplex, assuming that all possible probability distributions over the digits are equally likely may not appeal to a mathematician, as far as I [do not] know! Furthermore, the multinomial model imposed on the sequence of digits of π does not have to agree with this “data”, and discrepancies may be due to a poor sampling model as much as to an inappropriate prior: the data may agree more with H₀ than with H₁ simply because the sampling model in H₁ is ill-suited. The paper also considers a second prior (or posterior prior) that I do not find particularly relevant.
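To see where such a figure comes from, here is a minimal sketch (in Python, with scipy; the code and the function name log_bf01 are mine, not the authors') of the Bayes factor for this setup: under H₀ all ten digit probabilities equal 1/10, while under H₁ they follow a flat Dirichlet prior, so the marginal likelihood under H₁ is Dirichlet-multinomial and the multinomial coefficient cancels from the ratio.

```python
import numpy as np
from scipy.special import gammaln

def log_bf01(counts):
    """Log Bayes factor of H0: all digit probabilities equal 1/k,
    against H1: probabilities drawn from a flat Dirichlet prior."""
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    log_m0 = -n * np.log(k)  # (1/k)^n under H0
    # Dirichlet-multinomial marginal under H1 with alpha = (1, ..., 1):
    # Gamma(k) * prod_i Gamma(n_i + 1) / Gamma(n + k)
    log_m1 = gammaln(k) + gammaln(counts + 1).sum() - gammaln(n + k)
    return log_m0 - log_m1

# perfectly uniform counts over 10^8 digits give log10(BF01) ~ 31.4,
# the order of magnitude reported for the (close to uniform) digits of pi
print(log_bf01(np.full(10, 10**7)) / np.log(10))
```

With perfectly uniform counts this already lands on the 10³¹ scale, which shows how much the sheer size of n, rather than any subtle feature of the digits, drives the figure.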
For all I [do not] know, the huge value of the Bayes factor may be another avatar of the Lindley-Jeffreys paradox, in the sense of my interpretation of the phenomenon as a dilution of the prior mass over an unrealistically large space. Actually, the authors mention the paradox as well (p.5), but seemingly as a criticism of the frequentist approach. The picture above has its lower bound determined by a virtual dataset that produces a χ² statistic equal to the 95% χ² quantile, a dataset that still produces a fairly high Bayes factor (see the sketch at the end of this post). (The discussion seems to assume that the Bayes factor is a one-to-one function of the χ² statistic, which I do not think is correct. I also wonder whether exactly 95% of the sequence of Bayes factors stays within this band, as there is no theoretical reason for this to happen, of course.) Hence an illustration of the Lindley-Jeffreys paradox indeed, in its first interpretation as the clash between the conclusions of both paradigms. In conclusion, I am thus not terribly convinced that this experiment supports the use of a Bayes factor for settling this normality hypothesis. Not that I support the alternative use of a p-value, of course!

As a sidenote, the pdf file I downloaded from arXiv has a slight bug that interacted badly with my printer in Warwick, as shown in the picture above.
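To make the band's lower bound concrete, one can build such a virtual dataset by hand: perturb two of the ten expected counts so that Pearson's χ² statistic lands exactly on the 95% quantile of a χ² distribution with 9 degrees of freedom, then reuse the log_bf01 function from the sketch above. The construction is mine (the authors' virtual datasets may well differ), and the perturbed counts are not integers, which is harmless for the Γ-function computations.

```python
import numpy as np
from scipy.stats import chi2

n, k = 10**8, 10
expected = n / k
q95 = chi2.ppf(0.95, df=k - 1)  # ~ 16.92
# move mass d from one cell to another so the chi^2 statistic equals q95
d = np.sqrt(q95 * expected / 2)
counts = np.full(k, expected)
counts[0] += d
counts[1] -= d
x2 = ((counts - expected) ** 2 / expected).sum()  # equals q95 by construction
print(x2, log_bf01(counts) / np.log(10))
# the dataset sits exactly at the 5% significance boundary, yet log10(BF01)
# only drops by about chi^2 / (2 log 10) ~ 3.7, leaving a Bayes factor near 10^28
```

This is the Lindley-Jeffreys clash in miniature: a dataset that is borderline significant for the frequentist still gives overwhelming support to H₀ for the Bayesian, and the gap only widens as n grows.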