Archive for least squares

Bayesian propaganda?

Posted in Books, Kids, pictures, Statistics, University life on April 20, 2015 by xi'an

“The question is about frequentist approach. Bayesian is admissable [sic] only by wrong definition as it starts with the assumption that the prior is the correct pre-information. James-Stein beats OLS without assumptions. If there is an admissable [sic] frequentist estimator then it will correspond to a true objective prior.”

I had a wee bit of a (minor, very minor!) communication problem on X validated, about a question on the existence of admissible estimators of the linear regression coefficient in multiple dimensions, under squared error loss. When I first replied that all Bayes estimators with finite risk were de facto admissible, I got the above reply, which clearly misses the point. And as I had edited the OP's question to include more tags, the edited version was reverted with a comment about Bayesian propaganda! This is rather funny, if not hilarious, as (a) Bayes estimators are indeed admissible in the classical or frequentist sense (I actually fail to see a definition of admissibility in the Bayesian sense) and (b) the complete class theorems of Wald, Stein, and others (like Jack Kiefer, Larry Brown, and Jim Berger) come from the frequentist quest for best estimators. To make my point clearer, I also reproduced in my answer Stein's necessary and sufficient condition for admissibility from my book, but it did not help, as the theorem was "too complex for [the OP] to understand", which shows in fine the point of reading textbooks!
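For the record, here is a one-paragraph version of the standard argument behind "Bayes plus finite risk implies admissible" (my paraphrase of the textbook statement, given here under a uniqueness assumption). Suppose δ^π is the unique Bayes estimator associated with the prior π and has finite Bayes risk, and suppose some δ′ dominates it, i.e. R(θ,δ′) ≤ R(θ,δ^π) for all θ, with strict inequality somewhere. Integrating both risk functions against π gives

r(\pi,\delta') = \int R(\theta,\delta')\,\pi(\text{d}\theta) \le \int R(\theta,\delta^\pi)\,\pi(\text{d}\theta) = r(\pi,\delta^\pi) < \infty

so δ′ achieves the minimal Bayes risk and is itself a Bayes estimator for π; uniqueness then forces δ′ = δ^π (almost everywhere), which contradicts the strict inequality. Hence no such δ′ exists and δ^π is admissible.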

probit posterior mean

Posted in Statistics, University life on March 9, 2012 by xi'an

In a recent arXiv report, Yuzo Maruyama shows that the posterior expectation of a probit parameter has an almost closed form (under a flat prior), namely

\mathbb{E}[\beta|X,y] = (X^TX)^{-1} X^T\{2\text{diag}(y)-I_n\}\omega(X,y)

where ω involves the integration of two quadratic forms over the n-dimensional unit sphere. While this does not directly help with the MCMC derivation of the full posterior, it is an interesting lemma that shows a close proximity with the standard least squares estimate in linear regression.
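For readers who want to see that proximity numerically, here is a small sketch (my own toy code, not Maruyama's formula: the sample size, Metropolis step size, and burn-in are arbitrary illustrative choices). It estimates the flat-prior probit posterior mean with a plain random-walk Metropolis sampler and prints it next to the least squares solution; the scales need not match, but both estimates pass the data through the same (X^TX)^{-1}X^T operator.

# Toy check (not from the paper): flat-prior probit posterior mean via
# random-walk Metropolis, compared with the least squares solution.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p = 200, 3                                   # arbitrary toy dimensions
X = rng.standard_normal((n, p))
beta_true = np.array([0.5, -1.0, 0.8])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def log_post(beta):
    # flat prior: log-posterior = probit log-likelihood
    eta = X @ beta
    return np.sum(y * norm.logcdf(eta) + (1 - y) * norm.logcdf(-eta))

beta, lp, draws = np.zeros(p), log_post(np.zeros(p)), []
for t in range(20_000):
    prop = beta + 0.1 * rng.standard_normal(p)  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis acceptance step
        beta, lp = prop, lp_prop
    if t >= 5_000:                              # drop burn-in draws
        draws.append(beta)

print("posterior mean:", np.mean(draws, axis=0))
print("least squares: ", np.linalg.solve(X.T @ X, X.T @ y))

On simulated data of this kind the two vectors should agree up to an approximate scale factor, which is the proximity alluded to above; the exact connection of course goes through the ω(X,y) term in the lemma.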