Archive for regression

reading classics (#3)

Posted in Statistics, University life on November 15, 2012 by xi'an

Following in the reading classics series, my Master students in the Reading Classics Seminar course listened today to Kaniav Kamary's analysis of Dennis Lindley's and Adrian Smith's 1972 linear Bayes paper Bayes Estimates for the Linear Model in JRSS Series B. Here are her (Beamer) slides.

At a first (mathematical) level, this is one of the easier papers in the list, because it relies on linear algebra and normal conditioning. Of course, this is not the reason why Bayes Estimates for the Linear Model is in the list, nor how it impacted the field. It is indeed one of the first expositions of hierarchical Bayes modelling, with some bits of empirical Bayes shortcuts when computation got a wee bit in the way. (Remember, this is 1972, when shrinkage estimation and its empirical Bayes motivations were in full blast… and, despite Hastings' 1970 Biometrika paper, MCMC was yet to be imagined, except maybe by Julian Besag!) So, at secondary and tertiary levels, it is again hard to discuss, especially with Kaniav's low fluency in English. For instance, a major concept in the paper is exchangeability, not such a surprise given Adrian Smith's translation of de Finetti into English. But this is a hard concept when looking only at the algebra within the paper, as the motivation for exchangeability and partial exchangeability (and hierarchical models) comes from applied fields like animal breeding (as in Sørensen and Gianola's book). Otherwise, piling normal priors on top of normal priors is lost on the students. An objection from a 2012 reader is also that the assumption of exchangeability on the parameters of a regression model does not really make sense when the regressors are not normalised (this is linked to yesterday's nefarious post!): I much prefer the presentation we make of the linear model in Chapter 3 of our Bayesian Core, based on Arnold Zellner's g-prior. An interesting question from one student was whether this paper still had any relevance, other than historical. I was a bit at a loss on how to answer as, again, at a first level, the algebra was somewhat natural and, at a statistical level, less informative priors could be used. However, the idea of grouping parameters together in partial exchangeability clusters remains quite appealing and bound to provide gains in precision…
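To make the "piling normal priors on top of normal priors" concrete, here is a minimal sketch of the exchangeable two-stage hierarchy at the core of the paper (with notation simplified from Lindley and Smith's):

\[
y \mid \beta \sim \mathcal{N}_n(X\beta,\ \sigma^2 I_n), \qquad \beta_j \mid \xi \overset{\text{iid}}{\sim} \mathcal{N}(\xi,\ \tau^2), \quad j=1,\dots,p,
\]

with a further (possibly flat) prior on the hyperparameter \(\xi\). Normal conditioning then produces a posterior mean for \(\beta\) that shrinks the least squares estimate towards the common mean, which is where the connection with the shrinkage estimation literature of the time comes in.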

reading classics (#2)

Posted in Statistics, University life on November 8, 2012 by xi'an

Following last week's read of Hartigan and Wong's 1979 K-Means Clustering Algorithm, my Master students in the Reading Classics Seminar course listened today to Agnė Ulčinaitė covering Rob Tibshirani's original LASSO paper Regression shrinkage and selection via the lasso in JRSS Series B. Here are her (Beamer) slides.

Again not the easiest paper in the list, again mostly algorithmic and requiring some background on how it impacted the field. Even though Agnė also went through The Elements of Statistical Learning by Hastie, Tibshirani and Friedman, it was hard to get away from the paper itself to analyse more widely its importance, its connection with the Bayesian (linear) literature of the 1970s, its algorithmic and inferential aspects, like the computational cost, and recent extensions like the Bayesian LASSO, or the issue of handling n<p models. Remember that one of the S's in LASSO stands for shrinkage: it was quite pleasant to hear again about ridge estimators and Stein's unbiased estimate of the risk, as those were themes of my Ph.D. thesis… (I hope the students do not get discouraged by the complexity of those papers: there were fewer questions and fewer students this time. Next week, the compass will move to the Bayesian pole with a talk on Lindley and Smith's 1972 linear Bayes paper by one of my PhD students.)
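For reference, the lasso of the original paper solves a least squares problem under an ℓ₁ constraint:

\[
\hat\beta^{\text{lasso}} = \arg\min_{\beta}\ \sum_{i=1}^n \Big( y_i - \alpha - \sum_{j} x_{ij}\beta_j \Big)^2 \quad \text{subject to} \quad \sum_{j} |\beta_j| \le t,
\]

whereas ridge regression constrains \(\sum_j \beta_j^2\) instead. The absolute-value constraint is what sets some coefficients exactly to zero, hence the selection part of the acronym on top of the shrinkage.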

Hidden Markov mixtures of regression

Posted in Statistics on December 1, 2009 by xi'an

It took the RSS feed of Bayesian Analysis disappearing from my screen (because the Bayesian Analysis 4(4) issue was completed) for me to spot this very nice paper by Matthew A. Taddy and Athanasios Kottas on Markov switching regression models. It reminds me of earlier papers of mine with Monica Billio and Alain Monfort, and with Merrilee Hurn and Ana Justel, on Markov switching and mixtures of regression, respectively. At that time, with Merrilee, we had in mind to extend mixtures of regressions to mixtures of generalised linear models but never found the opportunity to concretise the model. The current paper goes much further by using Dirichlet process mixture priors, thus giving a semi-parametric flavour to the mixture of regressions. There is also an interesting application to fishery management.
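As a rough illustration of that semi-parametric flavour (a generic Dirichlet process mixture of regressions, not necessarily Taddy and Kottas's exact formulation), one may write

\[
y_i \mid x_i, \beta_i, \sigma_i^2 \sim \mathcal{N}(x_i^{\top}\beta_i,\ \sigma_i^2), \qquad (\beta_i, \sigma_i^2) \mid G \overset{\text{iid}}{\sim} G, \qquad G \sim \mathrm{DP}(\alpha, G_0),
\]

where the almost sure discreteness of \(G\) induces ties among the \((\beta_i,\sigma_i^2)\)'s and hence clusters of observations sharing the same regression line, with the number of clusters left unspecified.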

This issue also includes an emotional postnote by Brad Carlin, who is now stepping down as Bayesian Analysis Editor-in-chief. Brad unreservedly deserves thanks for steering Bayesian Analysis towards a wider audience and stronger requirements on the papers published in the journal. I think Bayesian Analysis is now a mainstream journal rather than the emanation of a society, albeit one as exciting as ISBA! The electronic format adopted by Bayesian Analysis should be exploited further towards forums and on-line discussions of all papers, rather than singling out one paper per issue, and I am glad Brad agrees on this possible change of editorial policy. All the best to the new Editor-in-chief, Herbie Lee!
