## Archive for Harold Jeffreys

## not Bayesian enough?!

Posted in Books, Statistics, University life with tags ABC, ABC model choice, Bayesian Analysis, classification, Harold Jeffreys, random forests, Read paper, summary statistics on January 23, 2015 by xi'an

**O**ur random forest paper was alas rejected last week. Alas because I think the approach is a significant advance in ABC methodology when implemented for model choice, avoiding the delicate selection of summary statistics and the reporting of shaky posterior probability approximations. Alas also because the referees somewhat missed the point, apparently perceiving random forests as a way to project a large collection of summary statistics onto a vector of limited dimension, as in the Read Paper of Paul Fearnhead and Dennis Prangle, while the central point in using random forests is precisely to avoid selecting or projecting summary statistics. They also dismissed our approach on the argument that the reduction in error rate brought by random forests over LDA or standard (k-nn) ABC is “marginal”, which indicates a degree of misunderstanding of what the classification error stands for in machine learning: in supervised learning with a large number of classes, the error rate cannot be brought arbitrarily close to zero, so even a modest reduction may represent most of the achievable gain. Last but not least, the referees did not appreciate why we mostly cannot trust posterior probabilities produced by ABC model choice, and hence why the posterior error loss is a valuable and almost inevitable machine learning alternative, dismissing the posterior expected loss as being *not Bayesian enough* (or at all), for “averaging over hypothetical datasets” (which is a replicate of Jeffreys’ famous criticism of p-values)! Certainly a first time for me to be rejected on this argument!

## Harold Jeffreys’ default Bayes factor [for psychologists]

Posted in Books, Statistics, University life with tags Bayesian hypothesis testing, Dickey-Savage ratio, Harold Jeffreys, overfitting, Statistical Science, testing, Theory of Probability on January 16, 2015 by xi'an

*“One of Jeffreys’ goals was to create default Bayes factors by using prior distributions that obeyed a series of general desiderata.”*

**T**he paper *Harold Jeffreys’s default Bayes factor hypothesis tests: explanation, extension, and application in Psychology* by Alexander Ly, Josine Verhagen, and Eric-Jan Wagenmakers is both a survey and a reinterpretation *cum* explanation of Harold Jeffreys’ views on testing. At about the same time, I received a copy from Alexander and a copy from the journal it had been submitted to! This work starts with a short historical entry on Jeffreys’ work and career, which includes four of his principles, quoted *verbatim* from the paper:

- “scientific progress depends primarily on induction”;
- “in order to formalize induction one requires a logic of partial belief” [enters the Bayesian paradigm];
- “scientific hypotheses can be assigned prior plausibility in accordance with their complexity” [a.k.a., Occam’s razor];
- “classical ‘Fisherian’ p-values are inadequate for the purpose of hypothesis testing”.

“The choice of π(σ) is therefore irrelevant for the Bayes factor as long as we use the same weighting function in both models”

A very relevant point made by the authors is that Jeffreys *only* considered embedded or nested hypotheses, a fact that allows for having common parameters between models and hence some form of reference prior. Even though (a) I dislike the notion of “common” parameters and (b) I do not think it is entirely legit (I was going to write proper!) from a mathematical viewpoint to use the same (improper) prior on both sides, as discussed in our Statistical Science paper. And in our most recent alternative proposal. The most delicate issue however is to derive a reference prior on the parameter of interest, which is *fixed* under the null and *unknown* under the alternative. This prevents the use of improper priors. Jeffreys tried to calibrate the corresponding prior by imposing asymptotic consistency under the alternative. And exact indeterminacy under “completely uninformative” data. Unfortunately, this is not a well-defined notion. In the normal example, the authors recall and follow the proposal of Jeffreys to use an improper prior π(σ)∝1/σ on the nuisance parameter, and cite the quote above in his defence. I find this argument quite weak because suddenly the prior on σ becomes a *weighting function*… A notion foreign to the Bayesian cosmology. If we use an improper prior for π(σ), the marginal likelihood of the data is no longer a probability density and I do not buy the argument that one should use the *same* measure with the *same* constant both on σ alone [for the nested hypothesis] and on the σ part of (μ,σ) [for the nesting hypothesis]. We are considering two spaces with different dimensions and hence orthogonal measures. This quote thus sounds more like wishful thinking than a justification. Similarly, the assumption of independence between δ=μ/σ and σ does not make sense for σ-finite measures.
Note that the authors later point out that (a) the posterior on σ varies between models despite using the *same* data [which shows that the parameter σ is far from common to both models!] and (b) the [testing] Cauchy prior on δ is only useful for the testing part and should be replaced with another [estimation] prior when the model has been selected. Which may end up as a backfiring argument about this default choice.
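The point about the arbitrary normalising constant is easy to miss, so here is a small numerical sketch (in Python, with made-up data): writing the improper prior as π(σ)=c/σ, the constant cancels in the Bayes factor only if one insists on using the *same* c in both marginal likelihoods; any other equally valid choice rescales the answer by the ratio of constants.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, size=8)   # hypothetical sample

def lik(mu, sigma):
    """Normal likelihood of the sample x."""
    return np.prod(stats.norm.pdf(x, mu, sigma))

def m0(c0):
    """Marginal likelihood under H0: mu=0, with improper prior c0/sigma."""
    val, _ = integrate.quad(lambda s: (c0 / s) * lik(0.0, s), 1e-3, 50)
    return val

def m1(c1):
    """Marginal under H1, Cauchy(0, sigma) prior on mu (Jeffreys' choice
    on delta=mu/sigma) and the same improper prior c1/sigma on sigma."""
    def inner(s):
        val, _ = integrate.quad(lambda m: stats.cauchy.pdf(m, 0, s) * lik(m, s),
                                -np.inf, np.inf)
        return val
    val, _ = integrate.quad(lambda s: (c1 / s) * inner(s), 1e-3, 50)
    return val

bf_same = m0(1.0) / m1(1.0)     # "same constant" convention
bf_other = m0(10.0) / m1(1.0)   # another equally legitimate constant
print(bf_same, bf_other)        # the Bayes factor is multiplied by c0/c1
```

The demonstration is trivial mathematically (c0 factors out of the integral), but it makes concrete why the quoted defence amounts to a convention rather than a justification.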

“Each updated weighting function should be interpreted as a posterior in estimating σ within their own context, the model.”

The re-derivation of Jeffreys’ conclusion that a Cauchy prior should be used on δ=μ/σ makes it clear that this choice only proceeds from an imperative of fat tails in the prior, without solving the calibration of the Cauchy scale. (Given the now-available modern computing tools, it would be nice to see the impact of this scale γ on the numerical value of the Bayes factor.) And maybe it also proceeds from a “hidden agenda” to achieve a Bayes factor that *solely* depends on the *t* statistic. Although this does not sound like a compelling reason to me, since the *t* statistic is not sufficient in this setting.

In a differently interesting way, the authors mention the Savage-Dickey ratio (p.16) as a way to represent the Bayes factor for nested models, without necessarily perceiving the mathematical difficulty with this ratio that we pointed out a few years ago. For instance, in the psychology example processed in the paper, the test is between δ=0 and δ≥0; however, if I set π(δ=0)=0 under the alternative prior, which should not matter *[from a measure-theoretic perspective where the density is uniquely defined almost everywhere]*, the Savage-Dickey representation of the Bayes factor returns zero, instead of 9.18!
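For readers unfamiliar with the representation, here is a minimal sketch (normal-normal conjugate case, hypothetical numbers) of the standard Savage-Dickey identity, where B01 is the ratio of posterior to prior densities at the tested value; in this regular case it matches the direct marginal-likelihood ratio, but the computation is blind to the measure-theoretic modification described above, since changing the prior density on a null set leaves these numbers unchanged:

```python
import numpy as np
from scipy import stats

xbar, se, tau = 0.6, 0.2, 1.0   # hypothetical summary, standard error, prior sd

# prior delta ~ N(0, tau^2), likelihood xbar ~ N(delta, se^2)
post_var = 1.0 / (1.0 / tau**2 + 1.0 / se**2)
post_mean = post_var * xbar / se**2

# Savage-Dickey: B01 = posterior density at 0 over prior density at 0
bf01_sd = stats.norm.pdf(0, post_mean, np.sqrt(post_var)) / stats.norm.pdf(0, 0, tau)

# direct computation: B01 = m0(x) / m1(x), with m1 a N(0, se^2 + tau^2) marginal
bf01_direct = stats.norm.pdf(xbar, 0, se) / stats.norm.pdf(xbar, 0, np.sqrt(se**2 + tau**2))

print(bf01_sd, bf01_direct)   # the two computations agree
```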

“In general, the fact that different priors result in different Bayes factors should not come as a surprise.”

The second example detailed in the paper is the test for a zero Gaussian correlation. This is a sort of “ideal case” in that the parameter of interest is between -1 and 1, hence makes the choice of a uniform U(-1,1) easy or easier to argue. Furthermore, the setting is also “ideal” in that the Bayes factor simplifies down into a marginal over the sample correlation only, under the usual Jeffreys priors on means and variances. So we have a second case where the frequentist statistic behind the frequentist test[ing procedure] is also the single (and insufficient) part of the data used in the Bayesian test[ing procedure]. Once again, we are in a setting where Bayesian and frequentist answers are in one-to-one correspondence (at least for a fixed sample size). And where the Bayes factor allows for a closed form through hypergeometric functions. Even in the one-sided case. (This is a result obtained by the authors, not by Jeffreys who, as the proper physicist he was, obtained approximations that are remarkably accurate!)
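Jeffreys’ exact Bayes factor involves hypergeometric functions, but a rough numerical sketch is easy to produce via Fisher’s z transformation instead (a hypothetical r and n; atanh(r) is approximately normal with mean atanh(ρ) and variance 1/(n-3)), integrating the uniform U(-1,1) prior on ρ:

```python
import numpy as np
from scipy import integrate, stats

r, n = 0.4, 50   # hypothetical sample correlation and sample size
z, se = np.arctanh(r), 1.0 / np.sqrt(n - 3)

# approximate sampling density of z given rho, via Fisher's transformation
num, _ = integrate.quad(
    lambda rho: stats.norm.pdf(z, np.arctanh(rho), se) * 0.5,   # U(-1,1) prior
    -1, 1)
bf10 = num / stats.norm.pdf(z, 0, se)
print(bf10)   # evidence in favour of a nonzero correlation
```

This is only an approximation standing in for Jeffreys’ hypergeometric closed form, but it illustrates the key structural point: the entire Bayesian computation runs through the (insufficient) sample correlation alone.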

“The fact that the Bayes factor is independent of the intention with which the data have been collected is of considerable practical importance.”

The authors have a side argument in this section in favour of the Bayes factor against the p-value, namely that the “Bayes factor does not depend on the sampling plan” (p.29), but I find this fairly weak (or tongue in cheek) as the Bayes factor *does* depend on the sampling distribution imposed on top of the data. It appears that the argument is mostly used to defend sequential testing.

“The Bayes factor (…) balances the tension between parsimony and goodness of fit, (…) against overfitting the data.”

*In fine*, I liked very much this re-reading of Jeffreys’ approach to testing, maybe all the more because I now think we should move away from it! I am not certain it will help in convincing psychologists to adopt Bayes factors for assessing their experiments, as it may instead frighten them away. And it does not bring an answer to the vexing issue of the relevance of point null hypotheses. But it constitutes a lucid and innovative account of the major advance represented by Jeffreys’ formalisation of Bayesian testing.

## “an outstanding paper that covers the Jeffreys-Lindley paradox”…

Posted in Statistics, University life with tags Aris Spanos, Bayesian model choice, Deborah Mayo, Error-Statistical philosophy, Harold Jeffreys, Jeffreys-Lindley paradox, Philosophy of Science, referee, severity on December 4, 2013 by xi'an

“This is, in this revised version, an outstanding paper that covers the Jeffreys-Lindley paradox (JLP) in exceptional depth and that unravels the philosophical differences between different schools of inference with the help of the JLP. From the analysis of this paradox, the author convincingly elaborates the principles of Bayesian and severity-based inferences, and engages in a thorough review of the latter’s account of the JLP in Spanos (2013).” Anonymous

**I** have now received a second round of reviews of my paper, “On the Jeffreys-Lindley paradox” (submitted to Philosophy of Science), and the reports are quite positive (or even extremely positive, as in the above quote!). The requests for changes are directed at clarifying points, improving the background coverage, and simplifying my heavy style (e.g., cutting Proustian sentences). These requests were easily addressed (hopefully to the satisfaction of the reviewers) and, thanks to the week in Warwick, I have already sent the paper back to the journal, with high hopes for acceptance. The new version has also been arXived. I must add that some parts of the reviews sounded much better than my original prose and I was almost tempted to include them in the final version. Take for instance

“As a result, the reader obtains not only a better insight into what is at stake in the JLP, going beyond the results of Spanos (2013) and Sprenger (2013), but also a much better understanding of the epistemic function and mechanics of statistical tests. This is a major achievement given the philosophical controversies that have haunted the topic for decades. Recent insights from Bayesian statistics are integrated into the article and make sure that it is mathematically up to date, but the technical and foundational aspects of the paper are well-balanced.” Anonymous

## Bayes 250th versus Bayes 2.5.0.

Posted in Books, Statistics, Travel, University life with tags ABC, Bayesian non-parametrics, Bernoulli society, Bruno de Finetti, Budapest, Dennis Lindley, EMS 2013, Harold Jeffreys, Hungary, INLA, ISBA, Julian Besag, MCMC, Monte Carlo Statistical Methods, Richard Price, Sharon McGrayne, SMC, Thomas Bayes on July 20, 2013 by xi'an

**M**ore than a year ago, Michael Sørensen (2013 EMS Chair) and Fabrizio Ruggeri (then ISBA President) kindly invited me to deliver the memorial lecture on Thomas Bayes at the 2013 European Meeting of Statisticians, which takes place in Budapest today and the following week. I gladly accepted, although with some worries at having to cover a much wider range of the field rather than my own research topic. I then set to work on the slides this past week, borrowing from my most “historical” lectures on Jeffreys and Keynes, my reply to Spanos, as well as getting a little help from my nonparametric friends *(yes, I do have nonparametric friends!)*. Here is the result, providing a partial (meaning both incomplete and biased) vision of the field.

**S**ince my talk is on Thursday, and because the talk is sponsored by ISBA, hence representing its members, please feel free to comment and suggest changes or additions as I can still incorporate them into the slides…* (Warning, I purposefully kept some slides out to preserve the most surprising entry for the talk on Thursday!)*

## integral priors for binomial regression

Posted in pictures, R, Statistics, University life with tags binomial regression, Harold Jeffreys, MCMC, Monte Carlo Statistical Methods, Murcia, numerical integration, objective Bayes, simulations, Spain on July 2, 2013 by xi'an

**D**iego Salmerón and Juan Antonio Cano from Murcia, Spain *(check the movie linked to the above photograph!)*, kindly included me in their recent integral prior paper, even though I mainly provided (constructive) criticism. The paper has just been arXived.

**A** few years ago (2008 to be precise), we wrote together an integral prior paper, published in *TEST*, where we exploited the implicit equation defining those priors (Pérez and Berger, 2002) to construct a Markov chain providing simulations from both integral priors. This time, we consider the case of a binomial regression model and the problem of variable selection. The integral equations are similarly defined and a Markov chain can again be used to simulate from the integral priors. However, the difficulty here follows from the regression structure, which makes selecting training datasets more elaborate and leads to a nonstandard posterior. Most fortunately, because the training dataset has exactly the right dimension, a re-parameterisation allows for a simulation of Bernoulli probabilities, provided a Jeffreys prior is used on those. (This obviously makes the “prior” dependent on the selected training dataset, but it should not overly impact the resulting inference.)

## 17 equations that changed the World (#2)

Posted in Books, Statistics with tags 17 equations That Changed the World, BBC, Black and Scholes formula, book review, Dojima rice exchange, Edwin Jaynes, financial crisis, Harold Jeffreys, Henri Poincaré, Ian Stewart, Michelson-Morley, Stephen Wolfram, The Black Swan, The Universe in zero words, Theory of Probability, Vladimir Arnold, wikipedia, xkcd on October 16, 2012 by xi'an*(continuation of the book review)*

“If you placed your finger at that point, the two halves of the string would still be able to vibrate in the sin 2x pattern, but not in the sin x one. This explains the Pythagorean discovery that a string half as long produced a note one octave higher.” (p.143)

**T**he following chapters are all about physics: the wave equation, Fourier’s transform and the heat equation, Navier-Stokes’ equation(s), Maxwell’s equation(s)—as in *The Universe in zero words*—, the second law of thermodynamics, E=mc² (of course!), and Schrödinger’s equation. I won’t go so much into details for those chapters, even though they are remarkably written. For instance, the chapter on waves made me understand the notion of harmonics in a much more intuitive and lasting way than previous readings. (This chapter 8 also mentions the “English mathematician Harold Jeffreys”, while Jeffreys was primarily a geophysicist. And a Bayesian statistician with major impact on the field, his *Theory of Probability* arguably being the first modern Bayesian book. Interestingly, Jeffreys also was the first one to find approximations to Schrödinger’s equation, however he is not mentioned in this later chapter.) Chapter 9 mentions the heat equation but is truly about Fourier’s transform, which he used as a tool and which later became a universal technique. It also covers Lebesgue’s integration theory, wavelets, and JPEG compression. Chapter 10 on Navier-Stokes’ equation also mentions climate sciences, where it takes a (reasonable) stand. Chapter 11 on Maxwell’s equations is a short introduction to electromagnetism, with radio the obvious illustration. (Maybe not the best chapter in the book.) Continue reading

## not only defended but also applied [to appear]

Posted in Books, Statistics, University life with tags Andrew Gelman, arXiv, ASA, discussion paper, Harold Jeffreys, The American Statistician, Theory of Probability, William Feller on June 12, 2012 by xi'an

**O**ur paper with Andrew Gelman, *“Not only defended but also applied”: the perceived absurdity of Bayesian inference*, has been reviewed for the second time and is to appear in The American Statistician as a discussion paper. Terrific news! This is my first discussion paper in The American Statistician (and the second in total, the first one being the re-read of Jeffreys’ *Theory of Probability*.)

*[The updated version is now on arXiv.]*