Just received an email from the IMS that Sir David Cox (Nuffield College, Oxford) has been awarded the International Prize in Statistics. As discussed earlier on the ‘Og, this prize is intended to act as the equivalent of a Nobel prize for statistics. While I still have reservations about the concept, I have none whatsoever about the nomination, as David would have been my suggestion from the start. Congratulations to him for the Prize and, more significantly, for his massive contributions to statistics, with foundational, methodological and societal impacts! [As Peter Diggle, President of the Royal Statistical Society, just pointed out, it is quite fitting that it happens on European Statistics day!]
After ABC in Paris in 2009, ABC in London in 2011, and ABC in Roma last year, things are accelerating since there will be—as I just learned— an ABC in Sydney next July (not June as I originally typed, thanks Robin!). The workshop on the current developments of ABC methodology thus leaves Europe to go down-under and to take advantage of the IMS Meeting in Sydney on July 7-10, 2014. Hopefully, “ABC in…” will continue its tour of European capitals in 2015! To keep up with an unbroken sequence of free workshops, Scott Sisson has managed to find support so that attendance is free of charge (free as in “no registration fee at all”!) but you do need to register as space is limited. While I would love to visit UNSW and Sydney once again and attend the workshop, I will not, getting ready for Cancún and our ABC short course there.
Xiao-Li Meng asked this question in his latest XL column, to which Andrew replied faster than I did, and in the same mood as mine. I had taken part in a recent discussion on this topic within the IMS Council, namely whether or not the IMS should associate with other organisations like the ASA towards funding and supporting this potential prize. My initial reaction was one of surprise that we could consider mimicking/hijacking the Nobel for our field. First, I dislike the whole spirit of most prizes, from the personalisation to the media frenzy and distortion, to the notion that we could rank discoveries and research careers within a whole field, and separate what is clearly due to a single individual from what is due to a team of researchers.
Being clueless about those fields, I will not get into a discussion of who should have gotten a Nobel Prize in medicine, physics, or chemistry, and who should not have. But there are certainly many worthy competitors to the actual winners. And this is not the point: I do not see how any of this counters the decline of science students in most of the Western World. That is, how a teenager can get more enticed to undertake maths or physics studies because she saw a couple of old guys wearing weird clothes getting a medal and a check in Sweden. I have no actual data, but could Xiao-Li give me a quantitative assessment of the claim that Nobel Prizes “attract future talent”? Chemistry departments keep closing for lack of a sufficient number of students, while (pure) maths and physics departments are threatened with the same fate… Even the Fields Medal, which has at least the appeal of being awarded to younger researchers, does not seem to fit Xiao-Li’s argument. (To take a specific example: the recent Fields medallist Cédric Villani is a great communicator and took advantage of his medal to promote maths throughout France, in conferences, the media, and by launching all kinds of initiatives. I still remain sceptical about the overall impact on recruiting young blood into maths programs [again with no data to back up my feeling].) I will not even mention the Nobel prizes for literature and peace, as there clearly is a political agenda in the nominations. (And selecting Sartre for the Nobel prize for literature definitely discredited it. At least for me.)
“…the media and public have given much more attention to the Fields Medal than to the COPSS Award, even though the former has hardly been about direct or even indirect impact on everyday life.” XL
Well, I do not see this other point of Xiao-Li’s. Nobel prizes are not prestigious for their impact on society, as most people do not understand at all what the rewarded research (career) is about. The most extreme example is the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel: On the one hand, Xiao-Li is right in pointing out that this is a very successful post-Alfred creation of a “Nobel Prize”. On the other hand, the fact that some years see two competing theories simultaneously win leads me to consider that this prize gives priority to theoretical constructs over any impact on the World’s economy. Obviously, this statement is a bit of shooting our field in the foot, since the only statisticians who got a Nobel Prize are econometricians and game-theorists! Nonetheless, it also shows that the happy few statisticians who entered the Nobel Olympus did not bring a bonus to the field… I thus remain my usual pessimistic self about the impact of a whatever-company Prize in Statistical Sciences in Memory of Alfred Nobel.
Another remark is the opposition between the COPSS Award, which remains completely ignored by the media (despite a wealth of great nominees with achievements in various domains), and the Fields Medal (which is not ignored). This has been a curse of Statistics, discussed at large before, namely the difficulty of separating what is maths and what is outside maths within the field. The Fields Medal is clearly very unlikely to go to a statistician, even a highly theoretical statistician, as there will always be “sexier” maths results, i.e., corpora of work that will be seen as higher maths than, say, the invention of the Lasso or the creation of generalized linear models. So there is no hope of reaching for an alternative Fields Medal with the same shine. Just like the Nobel Prize.
Other issues I could have mentioned, but for the length of the current rant, are the creation of rewards for solving a specific problem (as some found in Machine Learning), for involving multidisciplinary and multicountry research teams, and for reaching new orders of magnitude in processing large data problems.
The programs of the talks, posters and workshop are now printed and available on Speaker Deck (talks, posters, workshop). Please let me know if you spot anything wrong (even though it will not be reprinted!). This is presumably the last news item till Jan. 5, as I am almost off to Chamonix for a week of hard work preparing the conference, i.e., happily skiing with my family, looking forward to seeing some participants in the coming week in the streets of Chamonix and all of them/you on Monday, Jan. 6, at 8:30! (Snow is falling right now, so there should be no issue with finding open runs…)
|Title:||Automated variable selection for ABC algorithms|
|Abstract:||We discuss here recent advances made in the selection of summaries for approximate Bayesian computation (ABC). In particular, we emphasize the appeal of using machine learning tools such as random forests to build, in an automated manner, summary statistics of minimal dimension. Conditional on sufficient progress being made in this direction, we will also discuss why and how ABC methods have to be adapted when analyzing large molecular datasets and will present some progress concerning Single Nucleotide Polymorphism (SNP) data.|
|Key words:||Bayesian computation, ABC, SNP, model selection|
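The random-forest idea in the abstract can be sketched on a toy model. The sketch below is purely illustrative and not the authors' implementation: a forest is regressed on a pool of candidate summaries from a reference table of prior simulations, and its prediction then serves as a single, automatically constructed summary for plain rejection ABC. The model (normal mean), the candidate summaries, and the tolerance are all my own assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy model: n iid N(theta, 1) observations."""
    return rng.normal(theta, 1.0, size=n)

def summaries(x):
    """Pool of candidate summaries; only the first is informative here."""
    return np.array([x.mean(), x.std(), np.median(x), x.max() - x.min()])

# Reference table: draws from a flat prior and their simulated summaries
thetas = rng.uniform(-5, 5, size=2000)
S = np.array([summaries(simulate(t)) for t in thetas])

# Random-forest regression of theta on the candidate summaries;
# its prediction acts as a one-dimensional, automatically built summary
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(S, thetas)

# Observed data (true theta = 1.5), then rejection ABC on the RF summary
x_obs = rng.normal(1.5, 1.0, size=50)
s_obs = rf.predict(summaries(x_obs).reshape(1, -1))[0]
s_ref = rf.predict(S)
accepted = thetas[np.abs(s_ref - s_obs) < 0.25]
print(accepted.mean())  # crude posterior mean estimate
```

In practice one would predict on simulations held out from the forest's training set rather than in-sample, and the forest's variable importances can also be read off to rank the candidate summaries themselves.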
“I still feel that too much of academic statistics values complex mathematics over elegant simplicity — it is necessary for a research paper to be complicated in order to be published.” Roderick Little, JASA, p.359
Roderick Little wrote his Fisher lecture, recently published in JASA, around ten simple ideas for statistics. Its title is “In praise of simplicity not mathematistry! Ten simple powerful ideas for the statistical scientist”. While this title is rather antagonistic, blaming mathematical statistics for the rise of mathematistry in the field (a term borrowed from Fisher, who also invented the adjective ‘Bayesian’), the paper focuses on those ten ideas and very little on why there is (would be) too much mathematics in statistics:
- Make outcomes univariate
- Bayes rule, for inference under an assumed model
- Calibrated Bayes, to keep inference honest
- Embrace well-designed simulation experiments
- Distinguish the model/estimand, the principles of estimation, and computational methods
- Parsimony — seek a good simple model, not the “right” model
- Model the Inclusion/Assignment and try to make it ignorable
- Consider dropping parts of the likelihood to reduce the modeling part
- Potential outcomes and principal stratification for causal inference
- Statistics is basically a missing data problem
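The “calibrated Bayes” idea in the list can be made concrete with a minimal simulation check, which is my own illustration rather than anything from the paper: in a conjugate normal model, 95% credible intervals should also cover the true parameter about 95% of the time when parameters are drawn from the prior actually used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Conjugate model: theta ~ N(0, tau2), x_i | theta ~ N(theta, 1)
tau2, n, n_rep = 4.0, 20, 2000
covered = 0
for _ in range(n_rep):
    theta = rng.normal(0.0, np.sqrt(tau2))
    x = rng.normal(theta, 1.0, size=n)
    # Posterior is N(m, v) with v = 1/(n + 1/tau2) and m = v * n * xbar
    v = 1.0 / (n + 1.0 / tau2)
    m = v * n * x.mean()
    lo, hi = m - 1.96 * np.sqrt(v), m + 1.96 * np.sqrt(v)
    covered += (lo <= theta <= hi)
print(covered / n_rep)  # should sit near 0.95 when model and prior are right
```

Running the same check with a prior that disagrees with the generating mechanism makes the coverage drift away from 0.95, which is exactly the kind of honesty check the calibrated-Bayes idea calls for.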
“The mathematics of problems with infinite parameters is interesting, but with finite sample sizes, I would rather have a parametric model. “Mathematistry” may eschew parametric models because the asymptotic theory is too simple, but they often work well in practice.” Roderick Little, JASA, p.365
Both those rules and the illustrations that abound in the paper reflect Little’s research focus and obviously apply to his models in a fairly coherent way. However, while mostly a parametric-model user myself, I fear the rejection of non-parametric techniques is far too radical. It is more and more my conviction that we cannot handle the full complexity of a realistic structure in a standard Bayesian manner and that we have to give up on the coherence and completeness goals at some point… Using non-parametrics and/or machine learning on some bits and pieces then makes sense, even though it hurts elegance and simplicity.
“However, fully Bayes inference requires detailed probability modeling, which is often a daunting task. It seems worth sacrificing some Bayesian inferential purity if the task can be simplified.” Roderick Little, JASA, p.366
I will not discuss those ideas in detail, as some of them make complete sense to me (like Bayesian statistics laying its assumptions in the open) and others remain obscure (e.g., causality) or of limited applicability. It is overall a commendable Fisher lecture that focuses on methodology and the practice of statistical science, rather than on theory. I however do not see why maths should be blamed for this state of the field. Nor why mathematical statistics journals like AoS would carry some responsibility for the lack of further applicability in other fields. Students of statistics do need a strong background in mathematics, and I fear we are losing ground in this respect, at least judging by the growing difficulty in finding measure theory courses abroad for our exchange undergraduates from Paris-Dauphine. (I also find the model misspecification aspects mostly missing from this list.)
While I was editing our “famous” In praise of the referee paper—well, famous for being my most rejected paper ever!, with one editor not even acknowledging receipt!!—for the next edition of the ISBA Bulletin—where it truly belongs, being in fine a reply to Larry’s tribune therein a while ago—, Dimitris Politis had written a column for the IMS Bulletin—March 2013 Issue, page 11—on Refereeing and psychoanalysis.
Uh?! What?! Psychoanalysis?! Dimitris’ post is about referees being rude or abusive in their reports, expressing befuddlement at seeing such behaviour in a scientific review. If one sets aside cases of personal and ideological antagonism—always likely to occur in academic circles!—, a “good” reason for referees to get aggressively annoyed to the point of rudeness is sloppiness of one kind or another in the paper under review. One has to remember that refereeing is done for free and with no clear recognition in the overwhelming majority of cases, out of a sense of duty to the community and of fairness for having our own papers refereed. Reading a paper where typos abound, where the style is so abstruse as to hide the purpose of the work, where the literature is so poorly referenced as to make one doubt the author(s) ever read another paper, the referee may feel vindicated in venting his/her frustration at the wasted time by writing a few vitriolic remarks. Dimitris points out this can be very detrimental to young researchers. True, but what happened to the advisor at this stage?! Wasn’t she/he supposed to advise her/his PhD student not only in conducting innovative research but also in producing intelligible outcomes and in preparing papers suited to the journal they are submitted to..?! Being rude and aggressive does not contribute to improving the setting, any more than headbutting an Italian football player helps in winning the World Cup, but it may nonetheless be understood without resorting to psychoanalysis!
Most interestingly, this negative aspect of refereeing—which can be curbed by posterior actions of AEs and editors—would vanish if some of our proposals were implemented, incl. making referees’ reports part of the referee’s publication list, making those reports public as comments on the published paper (if published), and creating repositories or report commons independent from journals…