Archive for Normal deviate

machine learning as buzzword

Posted in Books, Kids on November 12, 2013 by xi'an

In one of his posts, my friend Larry mentioned that popular posts had to mention the Bayes/frequentist opposition in the title… I think mentioning machine learning is also a good buzzword for increasing traffic! I spotted this phenomenon last week when publishing my review of Kevin Murphy’s Machine Learning: the number of views and visitors jumped by at least half, exceeding the (admittedly modest) 1000 bar on two consecutive days. Interestingly, the number of copies of Machine Learning sold via my amazon associate link did not follow this trend: so far, I have only spotted a few copies sold, in similar amounts to the copies of Spatio-temporal Statistics, which I reviewed the week before. Or to most books I review, positively or negatively! (However, I did spot a correlated increase in overall amazon associate orders and brazenly attributed the order of a Lego robotics set to a “machine learner”! And as of yesterday, ’Og’s readers massively ordered 152 236 copies of the latest edition of Andrew’s Bayesian Data Analysis, thanks!)

a talk with Jay

Posted in Books, Running, Statistics, Travel, University life on November 1, 2013 by xi'an

I had a wonderful time at CMU, talking with a lot of faculty about their research (and mine), reminiscing about things past and expanding on things to come with Larry (not to mention exchanging blogging impressions), giving my seminar talk, having a great risotto at Casbah and a nice dinner at Legume, and going for morning runs in the nearby park… One particularly memorable moment was the discussion I had with Jay, as he went back to our diverging views on objective Bayes and improper priors, as expressed in the last chapter of his book and in my review of it. While we kept disagreeing on their relevance and on whether or not they should be used, I had to concede that one primary reason for using reference priors is laziness in not seeking expert opinions. Even so, there is always a limit to the information provided by such experts, which means a default input at one level or the next (of a hierarchical model). Jay also told me of his proposal (as reported in his 1996 book Bayesian Methods and Ethics in a Clinical Trial Design) for conducting clinical trials with several experts (with different priors) and sequentially weighting them by their predictive success. This proposal made me think of a sequential way to compare models by their predictive abilities while still using improper priors…
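The idea of sequentially weighting experts by their predictive success can be sketched in a few lines. This is my own toy illustration, not Jay's actual scheme: three hypothetical "experts" place Beta priors on a treatment's success probability, and after each binary outcome every expert's posterior is updated while the experts themselves are re-weighted by the posterior predictive probability they assigned to that outcome.

```python
# Toy sketch (my example, not Jay's 1996 proposal): experts hold Beta(a, b)
# priors on a success probability; weights evolve by predictive success.
experts = {"optimist": (8.0, 2.0), "sceptic": (2.0, 8.0), "vague": (1.0, 1.0)}
weights = {name: 1.0 / len(experts) for name in experts}
posteriors = dict(experts)

outcomes = [1, 1, 0, 1, 1, 1, 0, 1]  # simulated trial results (1 = success)

for y in outcomes:
    preds = {}
    for name, (a, b) in posteriors.items():
        # posterior predictive probability of this outcome under Beta(a, b)
        p_success = a / (a + b)
        preds[name] = p_success if y == 1 else 1.0 - p_success
        # conjugate Beta-Bernoulli update
        posteriors[name] = (a + y, b + 1 - y)
    # sequential re-weighting: multiply by predictive probability, renormalise
    total = sum(weights[n] * preds[n] for n in experts)
    weights = {n: weights[n] * preds[n] / total for n in experts}

print(weights)  # the better-calibrated expert accumulates weight
```

With mostly successful outcomes, the "optimist" ends up dominating. The weights are exactly posterior model probabilities under equal initial weights, which is what suggested to me the connection with comparing models by predictive ability.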

on noninformative priors

Posted in Books, Kids, Statistics on August 5, 2013 by xi'an

A few weeks ago, Larry Wasserman posted on Normal Deviate an entry on noninformative priors as a lost cause for statistics. I first reacted rather angrily to this post, then decided against posting my reply. After a relaxing week in Budapest, and with the prospect of the incoming summer break, I went back to the post and edited it towards more constructive goals… The post also got discussed by Andrew and Entsophy, generating in each case a heap of heated discussions. (Enjoy your summer, winter is coming!)

Although Larry wrote that he wanted to refrain from posting only on Bayesian statistics, he does seem attracted to them like a moth to a candle… This time, it is about the “lost cause of noninformative priors”. While Larry is 200% entitled to post about whatever he likes or dislikes, the post does not really bring new fuel to the debate, if debate there is. First, I think everyone agrees that there is no such thing as a noninformative prior or a prior representing ignorance. (To quote Jeffreys: “A prior probability used to express ignorance is merely the formal statement of ignorance”, ToP, VIII, §8.1.) Every prior brings something into the game and this is reflected in the posterior inference. Sometimes the impact is enormous and we may be unaware of it: take for instance Bayesian nonparametrics. It is thus essential to keep this in mind. (And to keep calm!) Which does not mean we should not use them. Indeed, noninformative priors are a way of setting a reference measure, from which one can start evaluating the impact of picking this or that prior. Just a measure. (No-one gets emotional when hearing the Lebesgue measure mentioned, right?!) And if the reference prior is a σ-finite measure, one cannot even put a meaning to events like θ>0. This reference measure is required to set the Bayesian crank turning, here or there depending on one’s prior beliefs or information. If we reject those reference priors, accepting only the cases when the prior is provided along with the data and the model, then I think everyone is a Bayesian. Even Feller. Even Larry (?).
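The point about σ-finite reference measures is easy to illustrate numerically. In this small sketch (my own standard example, not from Larry's post), the flat improper prior dθ on a normal mean is just the Lebesgue measure, so the prior "probability" of θ>0 has no meaning; yet once data arrive, the posterior is a perfectly proper N(x̄, σ²/n) distribution and the crank turns as usual.

```python
import math

# Flat improper prior dθ on the mean θ of N(θ, σ²) data with known σ:
# the prior cannot assign a probability to θ > 0, but the posterior can.
sigma = 1.0
x = [0.3, 1.1, 0.7, 0.9, 0.5]  # simulated observations
n = len(x)

# with prior dθ, the posterior is N(mean(x), σ²/n): proper and usable
post_mean = sum(x) / n
post_sd = sigma / math.sqrt(n)

def normal_cdf(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# posterior probability that θ > 0, perfectly well defined
p_theta_positive = 1.0 - normal_cdf((0.0 - post_mean) / post_sd)
print(post_mean, post_sd, p_theta_positive)
```

The same computation attempted under the prior alone would require normalising the Lebesgue measure, which is impossible; this is the sense in which the reference measure only acquires meaning through the posterior.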

Second, there is alas too much pathos or unintended meaning put in names like noninformative, ignorance, objective, &tc. And this may be the major message in Larry’s post. We should call those reference priors Linear A priors, in reference to the mostly undeciphered Minoan script. Or whatever name with no emotional content whatsoever, in order not to drive people crazy. Noninformative is not even a word, to start with… And I dunno how to define ignorance in a mathematical manner. Once more in connection with the EMS 2013 meeting in Budapest, I do not see why one should object more to reference priors than to the so-called “subjective” priors, as the former provide a baseline against which to test the latter, using e.g. Xiao-Li’s approach. I am actually much more annoyed by the use of a specific proper prior in a statistical analysis when this prior is neither justified nor assessed in terms of robustness. And I see nothing wrong in establishing either asymptotic or frequentist properties about some procedures connected with some of those reference priors: I became a Bayesian this way, after all.

Anyway, have a nice (end of the) summer if you are in the Northern Hemisphere, and expect delays (or snapshots!) on the ‘Og for the coming fortnight…