Search Results

comments on Watson and Holmes

April 1, 2016

“The world is full of obvious things which nobody by any chance ever observes.” (The Hound of the Baskervilles) In connection with the upcoming publication of James Watson’s and Chris Holmes’ Approximating models and robust decisions in Statistical Science, Judith Rousseau and I wrote a discussion of the paper, which was arXived yesterday. “Overall, […]

Why should I be Bayesian when my model is wrong?

May 9, 2017

Guillaume Dehaene posted the above question on X validated last Friday. Here is an excerpt from it: However, as everybody knows, assuming that my model is correct is fairly arrogant: why should Nature fall neatly inside the box of the models which I have considered? It is much more realistic to assume that the real […]

nonparametric Bayesian clay for robust decision bricks

January 30, 2017

Just received an email today that the discussion Judith and I wrote of Chris Holmes and James Watson’s paper has now been published as Statistical Science 2016, Vol. 31, No. 4, 506-510… While it is almost identical to the arXiv version, it can be read on-line.

ISBA 2016 [#6]

June 19, 2016

Fifth and final day of ISBA 2016, which was as full and intense as the previous ones. (Or even more so, taking into account the late evening social activities pursued by most participants.) First thing in the morning, I managed to get very close to a hill top, thanks to the hints provided by Jeff […]

ISBA 2016 [#3]

June 16, 2016

Among the sessions I attended yesterday, I really liked the one on robustness and model misspecification. Especially the talk by Steve McEachern on Bayesian inference based on insufficient statistics, with a striking graph of the degradation of the Bayes factor as the prior variance increases. I sadly had no time to grab a picture of […]