It has been about a week since I left the hospital and went back home, trying to get back in shape by resting, eating (to regain some of the lost kilos), spending time with my family, and exercising… I foolishly tried to go back to the university once and ended the day a wreck (especially as I had to walk the two kilometres of avenue Foch, the Line 2 métro being out of order!). Anyway, I read a lot, went back to my favourite bakery in Sceaux, had chats with neighbours, was reunited with the stray cat, and enjoyed the May sunshine while it lasted. I want to take this opportunity to give my warmest thanks to all of you who sent me greetings and good wishes, who visited me at the hospital or sent me goodies: I read all the books and ate most of the macaroons and chocolates! A very special thanks to my friends in the Statistics department at BYU for their unbelievable support! And to my mom, who came every single day… As reported in the earlier post, the thumb is gone and the wound is slowly healing, although it will take several weeks before the dressings are off for good. (Which gives me a good excuse to skip washing the dishes!) I dearly hope I will get the green light from the surgeon (tomorrow) to attend the i-like workshop next Wednesday!
Slides (in French) from today's presentation of my Master TSI at ENSAE:
My former student Pierre Jacob (now at NUS in Singapore and soon in Oxford, England) received the 2012 Jacques Neveu PhD thesis prize. (Jacques Neveu, a French probabilist specializing in Markov chains, was also the founder of the SMAI Probability and Statistics branch, which is why the prize is named after him. SMAI is the French counterpart of SIAM.) Coincidentally, Pierre also received the PhD prize of the Fondation Dauphine last week… Great news, and well-deserved rewards!
Today was my last Reading Seminar class and the concluding paper chosen by the student was Tukey’s “The future of data analysis”, a 1962 Annals of Math. Stat. paper. Unfortunately, reading this paper required much more maturity and background than the student could muster, which is why this last presentation is not posted on this page… Given the global and a-theoretical perspective of the paper, it was quite difficult to interpret without delving further into Tukey’s work and without a proper knowledge of what Data Analysis was in the 1960s. (The love affair of French statisticians with data analysis was then at its apex, but it has very much receded since!) Being myself unfamiliar with this paper, and judging mostly from the sentences pasted by the student in his slides, I cannot tell how much of the paper is truly visionary and how much is cheap talk: focussing on trimmed and winsorized means does not sound like offering a very wide scope for data analysis… I liked the quote “It’s easier to carry a slide rule than a desk computer, to say nothing of a large computer”! (As well as the quote from Asimov, “The sound of panting”…) Still, I am unsure I will keep the paper within the list next year!
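For readers unfamiliar with the two robust estimators mentioned above, here is a minimal sketch (my own illustration, not taken from Tukey's paper, with hypothetical function names and made-up data): a k-trimmed mean discards the k smallest and k largest observations, while a k-winsorized mean replaces them with the nearest remaining order statistics.

```python
# Illustration (not from Tukey's paper): trimmed vs. winsorized means.

def trimmed_mean(xs, k):
    """Mean after removing the k smallest and k largest values."""
    ys = sorted(xs)[k:len(xs) - k]
    return sum(ys) / len(ys)

def winsorized_mean(xs, k):
    """Mean after clamping the k extreme values on each side
    to the nearest retained order statistics."""
    ys = sorted(xs)
    lo, hi = ys[k], ys[len(ys) - k - 1]
    zs = [min(max(x, lo), hi) for x in ys]
    return sum(zs) / len(zs)

data = [1.2, 1.5, 1.7, 1.9, 2.1, 2.3, 2.4, 42.0]  # one gross outlier
print(trimmed_mean(data, 1))     # drops 1.2 and 42.0 before averaging
print(winsorized_mean(data, 1))  # clamps them to 1.5 and 2.4 instead
```

Both estimators tame the outlier at 42.0 that would otherwise dominate the plain average, which is precisely the kind of robustness Tukey was advocating.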
Overall, despite a rather disappointing lower tail in the distribution of the talks, I am very happy with the way the seminar proceeded this year and with the efforts the students made to assimilate the papers and to acquire the necessary presentation skills, including building a background in LaTeX and Beamer for most of them. I thus think almost all students will pass this course and I do hope those skills will prove profitable in their future studies…
Today’s classics seminar was rather special as two students were scheduled to talk. It was even more special as both students had picked (without informing me) the very same article by Berger and Sellke (1987), “Testing a point-null hypothesis: the irreconcilability of p-values and evidence”, on the (deep?) discrepancies between frequentist p-values and Bayesian posterior probabilities, in connection with the Lindley-Jeffreys paradox. Here are Amira Mziou’s slides:
and Jiahuan Li’s slides:
It was a good exercise to listen to both talks and see two perspectives on the same paper, and I hope the students in the class got the idea(s) behind the paper. As you can see, there were obvious repetitions between the talks, including the presentation of the lower bounds for all classes considered by Jim Berger and Tom Sellke, and the overall motivation for the comparison. Maybe as a consequence of my criticisms of the previous talk, both Amira and Jiahuan put some stress on the definitions, to formally set out the background of the paper. (I love the poetic line “To prevent having a non-Bayesian reality”, although I am not sure what Amira meant by it…)
I like the connection made therein with the Lindley-Jeffreys paradox, since this is the core idea behind the paper, and because I am currently writing a note about the paradox. Obviously, it was hard for the students to take a more remote stand on the reasons for the comparison, from questioning the relevance of testing point null hypotheses and of comparing the numerical values of a p-value with a posterior probability, to expecting asymptotic agreement between a p-value and a Bayes factor when both are convergent quantities, to setting the same weight on both hypotheses, to the ad-hockery of using a drift on one to equate the p-value with the Bayes factor, to using specific priors like Jeffreys’s (which has the nice feature that it corresponds to g=n in the g-prior, as discussed in the new edition of Bayesian Core). The students also failed to remark that the developments only cover real-valued parameters, as the phenomenon (that the lower bound on the posterior probabilities is larger than the p-value) does not occur so universally in larger dimensions. I would have expected more discussion from the floor, but we still got good questions and comments on a) why 0.05 matters and b) why comparing p-values and posterior probabilities is relevant. The next paper to be discussed will be Tukey’s “The future of data analysis”.
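To make the discrepancy concrete, here is a minimal numerical sketch (my own illustration, with hypothetical function names) in the point-null normal setting with equal prior weights on both hypotheses: over the class of all priors on the alternative, the Bayes factor in favour of the null is bounded below by the likelihood ratio at the maximum-likelihood alternative, exp(-z²/2), so the posterior probability of the null cannot fall below 1/(1+exp(z²/2)), which sits well above the corresponding p-value.

```python
import math

def pvalue_two_sided(z):
    """Two-sided p-value for a standard normal test statistic z."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def posterior_lower_bound(z, pi0=0.5):
    """Lower bound on P(H0 | z) over ALL priors on the alternative:
    the Bayes factor for H0 is at least exp(-z^2/2), the likelihood
    ratio against the maximum-likelihood alternative."""
    bf_lower = math.exp(-z * z / 2.0)
    return 1.0 / (1.0 + (1.0 - pi0) / pi0 / bf_lower)

# The gap at the usual significance thresholds:
for z in (1.645, 1.96, 2.576):
    print(f"z = {z}: p = {pvalue_two_sided(z):.3f}, "
          f"P(H0|z) >= {posterior_lower_bound(z):.3f}")
```

At z = 1.96, for instance, the p-value is 0.05 while the posterior probability of the null is at least about 0.128, matching the most striking figure reported by Berger and Sellke for the class of all priors.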