**T**oday’s classics seminar was rather special as two students were scheduled to talk. It was even more special as both students had picked (without informing me) the very same article by Berger and Sellke (1987), *Testing a point-null hypothesis: the irreconcilability of p-values and evidence*, on the (deep?) discrepancies between frequentist *p*-values and Bayesian posterior probabilities, in connection with the Lindley-Jeffreys paradox. Here are Amira Mziou’s slides:

and Jiahuan Li’s slides:

for comparison.

**I**t was a good exercise to listen to both talks, seeing two perspectives on the same paper, and I hope the students in the class got the idea(s) behind the paper. As you can see, there were obviously repetitions between the talks, including the presentation of the lower bounds for all classes considered by Jim Berger and Tom Sellke, and the overall motivation for the comparison. Maybe as a consequence of my criticisms of the previous talk, both Amira and Jiahuan put some stress on the definitions needed to formally set the background of the paper. (I love the poetic line: *“To prevent having a non-Bayesian reality”*, although I am not sure what Amira meant by this…)

**I** like the connection made therein with the Lindley-Jeffreys paradox, since this is the core idea behind the paper, and because I am currently writing a note about the paradox. Obviously, it was hard for the students to take a more remote stand on the reasons for the comparison, from questioning the relevance of testing point-null hypotheses and of comparing the numerical values of a *p*-value with a posterior probability, to expecting asymptotic agreement between a *p*-value and a Bayes factor when both are convergent quantities, to setting the same weight on both hypotheses, to the *ad-hoccery* of using a drift on one to equate the *p*-value with the Bayes factor, to using specific priors like Jeffreys’s (which has the nice feature that it corresponds to *g=n* in the *g*-prior, as discussed in the new edition of *Bayesian Core*). The students also failed to remark on the fact that the developments only cover real parameters, as the phenomenon (that the lower bound on the posterior probabilities is larger than the *p*-value) does not hold so universally in larger dimensions. I would have expected more discussion from the floor, but we still got good questions and comments on (a) why 0.05 matters and (b) why comparing *p*-values and posterior probabilities is relevant. The next paper to be discussed will be Tukey’s piece on the future of statistics.
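For readers who want to see the numbers, here is a quick sketch of my own (not code from the paper) of the two phenomena discussed above, in the simplest setting of a normal point-null test of H₀: θ=0 against H₁: θ≠0 with unit variance, equal prior weights on both hypotheses, and test statistic z = √n·x̄. The first function computes the two-sided *p*-value; the second, Berger and Sellke’s lower bound on P(H₀|x) over the class of *all* priors on H₁; the third, the Bayes factor under a *g*-prior θ ~ N(0, g/n) on H₁, which with Jeffreys’s choice g=n exhibits the Lindley-Jeffreys paradox.

```python
import math

def pvalue(z):
    """Two-sided p-value for a standard normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def lower_bound_all_priors(z):
    """Berger & Sellke's lower bound on P(H0 | x) over all priors on H1,
    with equal prior weights 1/2: the Bayes factor in favour of H0 is
    bounded below by B(z) = exp(-z^2/2), so P(H0 | x) >= B/(1+B)."""
    b = math.exp(-z * z / 2)
    return b / (1 + b)

def bayes_factor_gprior(z, n, g):
    """Bayes factor B01 for xbar ~ N(theta, 1/n) with g-prior
    theta ~ N(0, g/n) under H1:
    B01 = sqrt(1+g) * exp(-(g/(1+g)) * z^2/2)."""
    return math.sqrt(1 + g) * math.exp(-(g / (1 + g)) * z * z / 2)

z = 1.96  # "significant at the 5% level"
print(pvalue(z))                  # about 0.05
print(lower_bound_all_priors(z))  # about 0.128, already above the p-value

# Lindley-Jeffreys paradox with Jeffreys's g = n: for a fixed z, the
# Bayes factor in favour of H0 grows without bound as n increases.
for n in (10, 1000, 100000):
    print(n, bayes_factor_gprior(z, n, g=n))
```

Even the most favourable prior for H₁ leaves a posterior probability of about 0.128 on H₀ when the *p*-value is 0.05, which is the paper’s central irreconcilability; and at the same fixed z, the g=n Bayes factor drifts towards supporting H₀ as n grows.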