Search Results

comments on Watson and Holmes

April 1, 2016

“The world is full of obvious things which nobody by any chance ever observes.” The Hound of the Baskervilles In connection with the upcoming publication of James Watson’s and Chris Holmes’ Approximating models and robust decisions in Statistical Science, Judith Rousseau and I wrote a discussion of the paper, which was arXived yesterday. “Overall, […]

a computational approach to statistical learning [book review]

April 15, 2020

This book was sent to me by CRC Press for review for CHANCE. I read it over a few mornings while [confined] at home and found it much more computational than statistical, in the sense that the authors go quite thoroughly into the construction of standard learning procedures, including home-made R code that obviously helps […]

Why should I be Bayesian when my model is wrong?

May 9, 2017

Guillaume Dehaene posted the above question on X validated last Friday. Here is an excerpt from it: However, as everybody knows, assuming that my model is correct is fairly arrogant: why should Nature fall neatly inside the box of the models which I have considered? It is much more realistic to assume that the real […]

nonparametric Bayesian clay for robust decision bricks

January 30, 2017

Just received an email today that the discussion Judith and I wrote of Chris Holmes and James Watson’s paper is now published as Statistical Science 2016, Vol. 31, No. 4, 506-510… While it is almost identical to the arXiv version, it can be read on-line.

ISBA 2016 [#6]

June 19, 2016

Fifth and final day of ISBA 2016, which was as full and intense as the previous ones. (Or even more so, taking into account the late-evening social activities pursued by most participants.) First thing in the morning, I managed to get very close to a hill top, thanks to the hints provided by Jeff […]