## reading classics (#3)

**F**ollowing up in the reading classics series, my Master's students in the Reading Classics Seminar course listened today to Kaniav Kamary's analysis of Dennis Lindley's and Adrian Smith's 1972 linear Bayes paper *Bayes Estimates for the Linear Model*, published in JRSS Series B. Here are her (Beamer) slides.

**A**t a first (mathematical) level, this is one of the easier papers in the list, as it relies only on linear algebra and normal conditioning. Of course, this is not the reason why *Bayes Estimates for the Linear Model* is in the list, nor how it impacted the field. It is indeed one of the first expositions of hierarchical Bayes modelling, with some bits of empirical Bayes shortcuts when computation got a wee bit in the way. (Remember, this is 1972, when shrinkage estimation and its empirical Bayes motivations were in full blast, and when, despite Hastings' 1970 Biometrika paper, MCMC was yet to be imagined, except maybe by Julian Besag!) So, at secondary and tertiary levels, it is again hard to discuss, especially given Kaniav's limited fluency in English. For instance, a major concept in the paper is *exchangeability*, not such a surprise given Adrian Smith's translation of de Finetti into English. But this is a hard concept to grasp from the algebra in the paper alone, as the motivation for exchangeability and partial exchangeability (and for hierarchical models) comes from applied fields like animal breeding (as in Sørensen and Gianola's book). Otherwise, piling normal priors on top of normal priors is lost on the students.

An objection from a 2012 reader is also that the assumption of exchangeability on the parameters of a regression model does not really make sense when the regressors are not normalised (this is linked to yesterday's nefarious post!): I much prefer the presentation we make of the linear model in Chapter 3 of our Bayesian Core, based on Arnold Zellner's *g*-prior. An interesting question from one student was whether or not this paper still has any relevance other than historical. I was a bit at a loss as to how to answer since, again, at a first level, the algebra is somehow natural and, at a statistical level, less informative priors could be used. However, the idea of grouping parameters together into partial exchangeability clusters remains quite appealing and bound to provide gains in precision…
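To make the "normal priors on top of normal priors" point concrete, here is a minimal numerical sketch (not the paper's full hierarchical derivation): with known variances and an exchangeable N(0, τ²) prior on the regression coefficients, the posterior mean reduces to a ridge-type shrinkage of the least squares estimate, which is precisely the connection to shrinkage estimation mentioned above. All variable names and the simulated data are mine, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for y = X beta + eps, dimensions chosen for illustration
n, p = 50, 5
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
sigma2 = 1.0   # error variance, taken as known for this sketch
tau2 = 0.5     # prior variance of the exchangeable beta_i ~ N(0, tau2)

y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Posterior mean under the exchangeable normal prior:
#   E[beta | y] = (X'X + (sigma2 / tau2) I)^{-1} X'y
# i.e. a ridge-type estimator with penalty k = sigma2 / tau2
k = sigma2 / tau2
bayes_est = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Ordinary least squares estimate, for comparison
ols_est = np.linalg.solve(X.T @ X, X.T @ y)

# The Bayes estimate shrinks the OLS solution towards the prior mean (zero)
print(np.linalg.norm(bayes_est) < np.linalg.norm(ols_est))  # → True
```

The shrinkage factor depends on the ratio σ²/τ²: a tighter prior (smaller τ²) pulls the coefficients more strongly towards zero, while τ² → ∞ recovers least squares.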


This entry was posted on November 15, 2012 at 12:12 am and is filed under Statistics, University life with tags Adrian Smith, Beamer, Bruno de Finetti, classics, Dennis Lindley, hierarchical Bayesian modelling, Master program, presentation, regression, ridge regression, shrinkage estimation, slides, Université Paris Dauphine.

### 4 Responses to “reading classics (#3)”


November 15, 2012 at 5:24 am

So well rounded is your classic series…

November 15, 2012 at 10:11 pm

Thanks! Even though this sounds paradoxical, I should change the “classics” every year, though…

November 16, 2012 at 2:52 am

Perhaps just call it classics in Bayesian statistics, or the like…

November 16, 2012 at 3:29 pm

I do not think this is only for Bayesian statistics; you're welcome to suggest additions!